Vulnerability of Text-Matching in ML/AI Conference Reviewer Assignments to Collusions

Jhih-Yi (Janet) Hsieh

34th USENIX Security Symposium (USENIX Security '25) · Day 3 · Social Issues and Security

The integrity of the peer-review process is a cornerstone of scientific advancement, especially in rapidly evolving fields like Artificial Intelligence and Machine Learning. As these conferences scale to unprecedented sizes, automated systems for reviewer assignment become essential. This talk, presented by Jhih-Yi (Janet) Hsieh, examines a critical vulnerability in these automated systems: the susceptibility of text-matching algorithms to collusive manipulation. While reviewer bidding mechanisms have long been recognized as exploitable, the underlying text-matching components, which score the thematic similarity between a submission and a reviewer's past work, were often implicitly assumed to be robust against such attacks.
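To make the attack surface concrete, here is a minimal, hypothetical sketch of text-matching, not the system any conference actually uses (real deployments rely on far richer models): each (submission, reviewer) pair is scored by cosine similarity between bag-of-words vectors of the submission text and the reviewer's past papers. All names and example texts below are illustrative assumptions.

```python
import math
from collections import Counter


def bow(text: str) -> Counter:
    """Bag-of-words vector: lowercase token counts."""
    return Counter(text.lower().split())


def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(count * b[token] for token, count in a.items())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
        math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def match_scores(submission: str, reviewer_profiles: dict) -> dict:
    """Score every reviewer by textual similarity to one submission."""
    sub_vec = bow(submission)
    return {name: cosine_similarity(sub_vec, bow(profile))
            for name, profile in reviewer_profiles.items()}


# Hypothetical reviewer profiles (concatenated past-paper text).
profiles = {
    "reviewer_a": "federated learning privacy differential privacy",
    "reviewer_b": "program analysis fuzzing memory safety",
}

# A benign submission lands near the topically closest reviewer, while a
# colluding submission that mimics a target reviewer's vocabulary inflates
# its score with that reviewer -- the manipulation the talk studies.
benign = "fuzzing kernel drivers for memory safety bugs"
colluding = "federated learning privacy differential privacy"

print(match_scores(benign, profiles))
print(match_scores(colluding, profiles))
```

Under a similarity-based assignment like this, authors who can predict or copy a target reviewer's vocabulary can steer their paper toward that reviewer without ever touching the bidding system, which is why the assumption that text matching is collusion-proof deserves scrutiny.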

AI review

Solid academic security research that correctly identifies and validates a real vulnerability in automated peer-review assignment systems. The attack is well-constructed and the defenses have seen real-world adoption, but this is a niche problem with a narrow threat model that won't move the needle for most security conference attendees.
