The Parseltongue Protocol: A Deep Dive into 100+ Textual Obfuscation Methods
Joey Melo
[un]prompted 2026 — AI Security Practitioner Conference · Day 2
CrowdStrike researchers systematically tested over 100 textual obfuscation methods against nine state-of-the-art AI models using more than 17,000 unique prompts. Their findings: 82% of obfuscation methods succeeded at least once against at least one model; Base64 was the single most effective method despite being the most obvious; and, counterintuitively, giving the model less context makes attacks more successful. The takeaway for defenders: input-layer filtering is needed, not just output-layer guardrails.
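Since Base64 was the study's most effective obfuscation method, the input-layer filtering the talk calls for can be sketched concretely. The following is a minimal illustrative filter, not the speakers' actual tooling: the function names and thresholds are assumptions. It flags prompt substrings that decode cleanly from Base64 into printable text, which is a cheap first-pass check before a prompt ever reaches the model.

```python
import base64
import re

# Hypothetical input-layer filter (illustrative sketch only).
# It looks for long Base64-looking runs and tries to decode them;
# a clean decode into printable text is a strong obfuscation signal.

# 16+ chars keeps short English words from matching; threshold is an assumption.
BASE64_RUN = re.compile(r"[A-Za-z0-9+/]{16,}={0,2}")

def decoded_base64_payloads(prompt: str) -> list[str]:
    """Return the decoded text of any Base64-looking runs in the prompt."""
    payloads = []
    for match in BASE64_RUN.finditer(prompt):
        token = match.group(0)
        if len(token) % 4:  # valid Base64 length is a multiple of 4
            continue
        try:
            decoded = base64.b64decode(token, validate=True).decode("utf-8")
        except (ValueError, UnicodeDecodeError):
            continue  # not real Base64, or not text once decoded
        if decoded.isprintable():
            payloads.append(decoded)
    return payloads

def screen_prompt(prompt: str) -> bool:
    """True if the prompt should be blocked or escalated for review."""
    return bool(decoded_base64_payloads(prompt))
```

A production filter would cover the other families in the taxonomy (hex, ROT13, leetspeak, homoglyphs, and so on), but even this single check catches the method the study found most effective.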
AI review
17,000+ unique prompts, 9 models, 100+ obfuscation methods, and the counterintuitive finding that less context makes attacks more successful. CrowdStrike did the systematic empirical work nobody else had done, and the taxonomy is immediately actionable for defenders building input filtering layers.