Black-hat LLMs
Nicholas Carlini
[un]prompted 2026 — AI Security Practitioner Conference · Day 1 · 1
Nicholas Carlini, a research scientist at Anthropic and one of the most respected voices in AI security, delivered a talk the conference organizer introduced as a "global-level emergency." Using minimal scaffolding (a single Claude command-line invocation), Carlini and colleagues found the first critical vulnerability in the Ghost CMS (a 20-year-old SQL injection), multiple remotely exploitable heap buffer overflows in the Linux kernel, and hundreds of additional candidate bugs he has not yet had time to validate. His message: LLMs are the most significant development in security since the internet, and the curve is not bending yet.

---
AI review
Carlini just told you that LLMs found the first critical Ghost CMS vulnerability in 20 years, a 2003-vintage Linux kernel NFSv4 heap buffer overflow requiring two cooperating adversaries, and hundreds more unvalidated kernel bugs, all from a single command-line prompt with no fancy scaffolding. If you're not treating this as a five-alarm fire, you're not paying attention.