Poisoned ChatGPT Finds Work for Idle Hands: Exploring Developers' Coding Practices with Insecure Suggestions from Poisoned AI Models
Sanghak Oh, Kiho Lee, Seonhye Park, Doowon Kim, Hyoungshick Kim
IEEE Symposium on Security and Privacy 2024 · Day 1 · Continental Ballroom 4
This talk, presented by Sanghak Oh and colleagues from KAIST, delves into the critical and emerging threat of **poisoning attacks** against AI coding assistant tools. With the widespread adoption of tools like GitHub Copilot and ChatGPT to boost developer productivity, the security implications of their underlying large language models (LLMs) have become a paramount concern. The research meticulously investigates how insecure code snippets, intentionally or unintentionally embedded in the vast, unverified open-source datasets used for training these models, can lead to the generation of vulnerable code. This work highlights a significant supply chain risk, where the very tools designed to enhance efficiency could inadvertently introduce critical security flaws into software projects.
AI review
This research brutally exposes a critical supply chain risk: AI coding assistants can be poisoned to inject insecure code, and developers, even security experts, are alarmingly prone to accepting these suggestions. The empirical study provides undeniable proof that these tools, if compromised, will actively degrade code security. This isn't theoretical; it's a present danger.