Models and Systems: How to Think About Vulnerabilities and Artificial Intelligence
CVE/FIRST VulnCon 2025 · Main Stage
In this talk at VulnCon, Eric O'Lincoln examines the critical distinction between vulnerabilities found in Artificial Intelligence *models* and those residing in the broader *systems* that integrate them. As Large Language Models (LLMs) and other generative AI technologies see unprecedented enterprise adoption, the security community faces a pressing need to understand their unique attack surfaces and how traditional vulnerability management paradigms, such as the Common Vulnerabilities and Exposures (CVE) program, apply to this rapidly evolving landscape. O'Lincoln's presentation aims to clarify common misconceptions, provide a framework for identifying and categorizing AI-related weaknesses, and guide security professionals on where to focus their defensive efforts.
AI review
O'Lincoln delivers a competent, well-structured taxonomy talk on AI vulnerability classification — essentially a framework session for practitioners trying to apply CVE/CWE thinking to LLM deployments. The core argument (most exploitable AI vulnerabilities live at the system layer, not the model layer) is correct and useful, and the CVE assignability guidance has genuine operational value for security teams drowning in AI hype. This isn't research — it's applied vulnerability classification methodology — and judged in that lane it's solid work. It won't blow anyone's hair back, but it'll help a…