Securing Workspace GenAI at Google Speed: Surviving the Perfect Storm

Nicolas Lidzborski

[un]prompted 2026 — AI Security Practitioner Conference · Day 2

The generative AI era has collapsed the traditional distinction between code and data — making every token in an LLM's context a potential instruction and rendering reactive filtering fundamentally futile. Nicolas Lidzborski, a 25-year security veteran who has spent three years securing Gemini in Google Workspace, presents a four-layer structural blueprint for building AI defenses that actually hold: low-risk input preparation, context hardening, deterministic orchestration, and output sanitization. The talk ends with a concrete demonstration of why the cat-and-mouse approach to prompt filtering is a game defenders cannot win.
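The four layers named in the abstract can be imagined as stages in a single pipeline. The sketch below is purely illustrative — every function name, delimiter, and policy here is a hypothetical stand-in, not the implementation described in the talk:

```python
# Hypothetical sketch of a four-layer LLM defense pipeline.
# The layer names come from the talk; the code is an assumed illustration.

def prepare_input(user_text: str) -> str:
    """Layer 1: low-risk input preparation (e.g. whitespace normalization)."""
    return " ".join(user_text.split())

def harden_context(system_prompt: str, user_text: str) -> str:
    """Layer 2: context hardening — clearly fence off untrusted content
    so downstream policy can treat it as data, never as instructions."""
    return f"{system_prompt}\n<untrusted>\n{user_text}\n</untrusted>"

def orchestrate(requested_tool: str, allowed_tools: set[str]) -> bool:
    """Layer 3: deterministic orchestration — a policy check that runs
    outside the model, so no prompt can talk its way past it."""
    return requested_tool in allowed_tools

def sanitize_output(model_output: str) -> str:
    """Layer 4: output sanitization (toy example: strip script tags)."""
    return model_output.replace("<script>", "").replace("</script>", "")

# Example flow: the untrusted document asks for an email tool it never gets.
prepared = prepare_input("  Summarize   this   doc  ")
context = harden_context("You are a summarizer.", prepared)
allowed = orchestrate("send_email", {"search", "summarize"})  # False: denied
cleaned = sanitize_output("Summary: <script>alert(1)</script>done")
```

The key structural point the layers illustrate: the tool-call decision in layer 3 is ordinary deterministic code, so its verdict does not depend on anything an attacker writes into the context.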

AI review

The most technically rigorous treatment of the prompt injection defense problem I've seen from a production environment at scale. Lidzborski spent three years securing Gemini across Google Workspace, and the four-layer blueprint — input preparation, context hardening, deterministic orchestration, output sanitization — comes from fighting real adversaries, not building a slide deck.

Watch on YouTube