Combating Generative AI's Privacy Abuses

BSidesSF 2024 · Day 1

This article examines the critical privacy, security, and ethical challenges posed by the rapid proliferation of Generative AI (GenAI) and Large Language Models (LLMs). It summarizes a panel discussion at BSidesSF 2024, moderated by Trisha, which brought together a diverse group of experts: Aura Deshpande, Muhammad Tay, Nandita Rao Narla, and Raji Vamanan. The panelists explored the "astronomical growth" of GenAI, highlighting its pervasive influence across industries, from booking airline tickets to writing user stories. The core focus was on proactively identifying and addressing potential privacy violations and abuses before they become widespread.

AI review

This panel provided a robust overview of critical privacy abuses in generative AI, from data memorization and inference attacks to hallucinations that create legal exposure. The discussion moved beyond theoretical threats to practical mitigation strategies, including data hygiene, privacy-enhancing technologies such as fully homomorphic encryption (FHE) and its current limitations, and the crucial role of regulatory frameworks. While not a deep dive into a novel exploit, the collective expertise offered a highly actionable and realistic assessment of GenAI's privacy landscape for practitioners.

Watch on YouTube