Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models
Shawn Shan, Wenxin Ding, Josephine Passananti, Stanley Wu, Haitao Zheng, Ben Y. Zhao
IEEE Symposium on Security and Privacy 2024 · Day 1 · Continental Ballroom 5
This talk introduces **Nightshade**, a data poisoning attack designed to deter unauthorized use of copyrighted content in **text-to-image generative AI models**. Presented by Shawn Shan and his co-authors, Nightshade addresses a growing challenge for content creators—from gaming companies and animation studios to fashion designers—whose intellectual property is routinely scraped from the web into massive AI training datasets without consent or compensation. The core problem is how easily these models can replicate, modify, and generate new content heavily derived from existing copyrighted works, undermining creators' revenue streams and brand integrity.
AI review
This talk introduces Nightshade, a groundbreaking data poisoning attack that weaponizes subtle image perturbations to corrupt text-to-image AI models. By exploiting data sparsity and noisiness, it demonstrates how a minimal number of poisoned samples can drastically alter model outputs, creating a powerful deterrent against unauthorized data scraping. This work fundamentally shifts the power balance in favor of content creators, forcing AI developers to rethink data acquisition ethics.
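The mechanism the review alludes to—small, imperceptible image perturbations that shift what a model learns from a caption—can be illustrated in toy form. The sketch below is purely conceptual and is not the paper's actual method: it uses a random linear map as a stand-in "feature extractor" and nudges an image vector's features toward an unrelated target concept while keeping the pixel-space change within a small bound. All names, dimensions, and budgets are hypothetical.

```python
import numpy as np

# Conceptual sketch only (NOT Nightshade's implementation): a
# prompt-specific poison keeps its original caption ("dog") but its
# features are pushed toward an unrelated concept ("cat"), so a model
# trained on it learns the wrong association.

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))           # stand-in linear "feature extractor"

def features(x):
    return W @ x

anchor = rng.normal(size=16)           # clean image vector, captioned "dog"
target = rng.normal(size=16)           # image vector of the target concept "cat"

eps = 0.05                             # perceptual budget (L-infinity bound)
delta = np.zeros_like(anchor)
for _ in range(200):
    # Gradient of 0.5 * ||features(anchor + delta) - features(target)||^2
    grad = W.T @ (features(anchor + delta) - features(target))
    delta -= 0.01 * grad               # move features toward the target
    delta = np.clip(delta, -eps, eps)  # keep the pixel change imperceptible

poisoned = anchor + delta              # still labeled "dog" in training data
gap_before = np.linalg.norm(features(anchor) - features(target))
gap_after = np.linalg.norm(features(poisoned) - features(target))
```

After optimization, `gap_after` is smaller than `gap_before` while `delta` stays inside the budget—the poisoned sample looks like the original but pulls the "dog" concept toward "cat" during training, which is why, as the review notes, a small number of such samples can shift a model's outputs.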