Trust Me, I Know This Function: Hijacking LLM Static Analysis using Bias
Shir Bernstein
Network and Distributed System Security (NDSS) Symposium 2026 · Day 1 · Systems Security
This talk presents a novel attack class called **Familiar Pattern Attacks (FPAs)** that exploits a fundamental weakness in how LLMs analyze code: **abstraction bias**. When LLMs encounter code patterns they have seen thousands of times during pre-training (like calculating the nth prime number), they skip deep reasoning and instead retrieve high-level semantic templates from memory. By embedding small, deterministic bugs in these familiar patterns, an adversary can make the LLM's interpretation of code diverge from its actual runtime behavior -- effectively hijacking LLM-based static analysis, code review, and vulnerability detection.
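To make the mechanism concrete, here is a minimal, hypothetical sketch in Python (my illustration, not an example taken from the talk): the function reads as the textbook "nth prime" routine that an LLM has seen countless times, but a single off-by-one in the loop condition makes its runtime behavior diverge from the retrieved template. The function name and the specific bug are assumptions chosen only to illustrate the idea of a Familiar Pattern Attack.

```python
def nth_prime(n):
    """Return the n-th prime (1-indexed) -- reads like the textbook routine."""
    count = 0
    candidate = 1
    # Subtle, deterministic bug: the condition should be `count < n`.
    # As written, the loop runs one extra iteration, so the function
    # actually returns the (n+1)-th prime: nth_prime(1) -> 3, nth_prime(10) -> 31.
    while count <= n:
        candidate += 1
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return candidate


# A reviewer pattern-matching on the familiar template reports "returns the n-th prime";
# executing the code exposes the gap between interpretation and runtime behavior.
print([nth_prime(i) for i in range(1, 6)])  # [3, 5, 7, 11, 13], not [2, 3, 5, 7, 11]
```

The point of the sketch is that nothing about the code looks adversarial: a model relying on its memorized abstraction of the pattern would summarize it as correct, while only line-by-line reasoning (or execution) reveals the planted divergence.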
AI review
A genuinely novel attack class that exploits a fundamental structural weakness in LLM code analysis. Familiar Pattern Attacks are cheap, stealthy, transferable across models and languages, and achieve a 97% success rate against Cursor and GitHub Copilot. The finding that reasoning models produce stronger attacks rather than defend against them is devastating. This has immediate implications for supply chain security, code review automation, and anyone relying on LLMs to analyze untrusted code.