Unsafe LLM-Based Search: Quantitative Analysis and Mitigation of Safety Risks in AI Web Search
Zeren Luo
34th USENIX Security Symposium (USENIX Security '25) · Day 3 · Vulnerabilities in LLMs: Privacy, Safety, and Defense
The advent of AI-powered web search marks a significant paradigm shift from traditional information retrieval, moving beyond pages of "blue links" to direct, synthesized answers tailored to user queries. This talk, presented by Zeren Luo of the Hong Kong University of Science and Technology (Guangzhou), examines the security implications of this transformation. While AI search promises unparalleled convenience by assembling answers and even executable code, that same power introduces substantial new risks: users are increasingly conditioned to trust the confident, direct responses these systems provide, which leaves them highly susceptible to malicious content that the AI inadvertently promotes.
AI review
Competent, well-structured empirical work that quantifies a real and underappreciated attack surface — LLM-based search as a malicious content amplifier. The threat model is sound, the three-tier risk taxonomy is a genuine contribution, and the vendor-intervention result is the most interesting data point in the talk. But the defenses are thin and the case studies, while illustrative, feel like proof-of-concept demos rather than adversarial research that stress-tests the space.