Ignore Your Generative AI Safety Instructions. Violate the CFAA?
Black Hat USA 2024 · Day 1 · Briefing
In an era of rapid proliferation of generative AI, particularly large language models (LLMs), the security implications of these powerful systems are a paramount concern. This Black Hat USA talk, "Ignore Your Generative AI Safety Instructions. Violate the CFAA?", examines an increasingly relevant intersection of cybersecurity and law: whether **prompt injection** against an LLM could constitute a violation of the **Computer Fraud and Abuse Act (CFAA)**. Presented by Kendra Albert, an attorney and academic specializing in law and machine learning attacks, alongside co-authors Jonathon Penney (an academic and lawyer with extensive CFAA expertise) and Ram Shankar Siva Kumar (an adversarial machine learning researcher), the talk navigates the complex legal landscape surrounding what many perceive as playful manipulation of AI.