Security Considerations for Services Using AI Models
Shrey Bagga
BSidesSF 2024 · Day 1
This talk examines the critical security considerations for services that leverage Artificial Intelligence (AI) models. As AI and Large Language Models (LLMs) become increasingly ubiquitous in both personal and organizational contexts, from self-driving cars to enterprise applications, understanding and mitigating their unique security risks is paramount. Bagga, a Product Security Engineer at AppDynamics (part of Cisco Systems), highlights how AI engineering introduces new attack surfaces and challenges that extend beyond traditional software security paradigms.
AI review
This talk provides a solid overview of critical security considerations for services leveraging AI models. It covers a range of attack vectors, from input manipulation to supply chain issues, and then presents practical mitigation strategies such as an AI Bill of Materials and a Secure AI Development Life Cycle. While not groundbreaking zero-day research, it is a well-structured and technically sound synthesis of current threats and defenses, making it highly valuable for anyone building or securing AI systems.