Preempt: Sanitizing Sensitive Prompts for LLMs

Amrita Roy Chowdhury

Network and Distributed System Security (NDSS) Symposium 2026 · Day 3 · Privacy & Measurement

Preempt is a **prompt sanitization system** that protects sensitive information in LLM prompts while preserving utility. It targets **prompt-invariant tasks** (translation, RAG, financial advice), where the LLM's response does not depend on the exact sensitive values. The system combines two complementary techniques: **Format-Preserving Encryption (FPE)** for format-dependent tokens (names, SSNs, credit card numbers) and **Metric Local Differential Privacy (MLDP)** for value-dependent tokens (age, salary, account balances). Preempt is **stateless**: it requires only a secret key and no lookup tables, which makes it GDPR/CCPA compliant by construction.
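
To make the two mechanisms concrete, here is a minimal, hypothetical Python sketch (not the paper's implementation): `fpe_encrypt`/`fpe_decrypt` form a toy FF1-style Feistel cipher keyed with HMAC that maps a digit string onto another digit string of the same length, and `mldp_release` adds Laplace noise of scale 1/ε, the standard mechanism for metric DP under the distance |x − x′|. All identifiers, the key, the round count, and the demo values are illustrative assumptions.

```python
# Hypothetical sketch, not Preempt's actual code: a toy FF1-style Feistel
# FPE over digit strings, plus a metric-LDP (Laplace) release for numbers.
# Key, round count, and demo values are all illustrative assumptions.
import hashlib
import hmac
import random

ROUNDS = 10  # even number of Feistel rounds (NIST FF1 also fixes a round count)


def _prf(key: bytes, rnd: int, half: str, width: int) -> int:
    """Keyed round function: HMAC-SHA256 of (round, other half), mod 10**width."""
    mac = hmac.new(key, f"{rnd}:{half}".encode(), hashlib.sha256).digest()
    return int.from_bytes(mac, "big") % 10**width


def fpe_encrypt(key: bytes, digits: str) -> str:
    """Map a digit string (len >= 2) to another digit string of equal length."""
    u = len(digits) // 2
    a, b = digits[:u], digits[u:]
    for i in range(ROUNDS):
        m = u if i % 2 == 0 else len(digits) - u  # alternate half widths
        c = (int(a) + _prf(key, i, b, m)) % 10**m
        a, b = b, f"{c:0{m}d}"
    return a + b


def fpe_decrypt(key: bytes, digits: str) -> str:
    """Invert fpe_encrypt by unwinding the Feistel rounds in reverse order."""
    u = len(digits) // 2
    a, b = digits[:u], digits[u:]
    for i in reversed(range(ROUNDS)):
        m = u if i % 2 == 0 else len(digits) - u
        c = (int(b) - _prf(key, i, a, m)) % 10**m
        a, b = f"{c:0{m}d}", a
    return a + b


def mldp_release(value: float, epsilon: float) -> float:
    """Metric-LDP release on the real line: Laplace(0, 1/epsilon) noise gives
    Pr[M(x)=y] <= exp(epsilon * |x - x'|) * Pr[M(x')=y] for all x, x'."""
    # A Laplace draw is the difference of two independent Exp(1) draws, scaled.
    noise = (random.expovariate(1.0) - random.expovariate(1.0)) / epsilon
    return value + noise


if __name__ == "__main__":
    key = b"demo-secret-key"             # the only state a Preempt-style setup needs
    ssn = "123456789"                    # format-dependent token -> FPE
    enc = fpe_encrypt(key, ssn)          # another 9-digit string, still LLM-parseable
    assert fpe_decrypt(key, enc) == ssn  # the key alone recovers the original
    salary = mldp_release(85_000, epsilon=0.01)  # value-dependent token -> MLDP
    print(enc, round(salary))
```

Because FPE is a keyed permutation rather than a lookup table, nothing needs to be stored between requests: the same key that replaced the values in the outgoing prompt can de-sanitize them in the LLM's response, which is what the stateless claim rests on.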

AI review

A well-engineered prompt sanitization system combining FPE and metric DP that achieves near-zero utility loss on prompt-invariant tasks. The formal privacy guarantees and stateless design are clean contributions, and the finding that format preservation is critical for LLM processing is practically valuable. However, the restriction to invariant tasks limits applicability, and the open problems (token dependencies, context-emergent sensitivity) are significant gaps.
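
For reference, the "formal privacy guarantees" on the value side are instances of the standard metric-DP (d_χ-privacy) condition, which is what MLDP denotes here: indistinguishability of two inputs degrades with their distance.

```latex
% Standard metric-(L)DP condition; M is the randomized sanitizer.
\Pr[M(x) = y] \;\le\; e^{\varepsilon \, d(x, x')} \cdot \Pr[M(x') = y]
\qquad \text{for all inputs } x, x' \text{ and all outputs } y.
```

On the real line with d(x, x′) = |x − x′|, adding Laplace noise of scale 1/ε satisfies this condition, which is the mechanism sketched above.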
