Poisoned ChatGPT Finds Work for Idle Hands: Exploring Developers' Coding Practices with Insecure Suggestions from Poisoned AI Models
Sanghak Oh, Kiho Lee, Seonhye Park, Doowon Kim, Hyoungshick Kim
IEEE Symposium on Security and Privacy 2024 · Day 1 · Continental Ballroom 4
In an era where AI coding assistants are rapidly becoming indispensable tools for software developers, this talk from IEEE S&P presents a critical examination of their inherent security risks. Titled "Poisoned ChatGPT Finds Work for Idle Hands: Exploring Developers' Coding Practices with Insecure Suggestions from Poisoned AI Models," the research by Sanghak Oh and colleagues from San University delves into the alarming efficacy of **poisoning attacks** against these AI models. The core premise is that by injecting malicious, insecure code snippets into the vast datasets used to train these models, attackers can manipulate AI assistants to generate vulnerable code, directly impacting the security posture of newly developed software.
AI review
This research exposes a critical new attack vector: poisoning AI coding assistants to inject severe vulnerabilities directly into developer workflows. The study rigorously demonstrates how developers, even experts, readily adopt insecure, AI-generated code, fundamentally shifting how we must approach secure development and AI model training.