Everything is Good for Something: Counterexample-Guided Directed Fuzzing via Likely Invariant Inference
Heqing Huang, Anshunkang Zhou, Mathias Payer, Charles Zhang
IEEE Symposium on Security and Privacy 2024 · Day 2 · Continental Ballroom 4
In an era where software underpins nearly every facet of modern society, the prevalence and potential impact of software bugs have escalated dramatically. From critical infrastructure to personal devices, vulnerabilities can lead to severe consequences, including financial loss and even threats to human life. Securing software is made harder by its ever-increasing scale and complexity: projects like the Linux kernel and the Chrome browser comprise tens of millions of lines of code, while software-heavy systems such as Tesla vehicles exceed hundreds of millions. Detecting vulnerabilities in such vast codebases is akin to finding a needle in a haystack, a task made even more daunting by the combinatorial explosion of execution paths and intricate path conditions. This talk introduces a novel approach to directed fuzzing, named **Hollow**, which significantly improves bug-detection efficiency by addressing a fundamental limitation of existing directed fuzzing techniques: bug-triggering inputs are generated only indirectly.
**AI review**
Hollow marks a substantial advance in directed fuzzing, tackling the persistent inefficiency of bug triggering through counterexample-guided likely invariant inference. The approach markedly accelerates vulnerability discovery and has already uncovered 10 incomplete CVE fixes, evidence of real-world impact. This research pushes the state of the art for finding elusive, hard-to-trigger vulnerabilities.
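To make the core idea concrete, here is a minimal sketch of counterexample-guided likely-invariant refinement, the general loop the talk describes. All names here (`Invariant`, `infer_range_invariant`, `refine`, the variable `len`) are illustrative assumptions, not Hollow's actual API or implementation.

```python
# Hypothetical sketch: infer a likely invariant from observed executions,
# then weaken it whenever fuzzing produces a counterexample. This mirrors
# the high-level loop described in the talk, not Hollow's real code.
from dataclasses import dataclass

@dataclass
class Invariant:
    """A likely range invariant over one variable at a program point."""
    var: str
    lo: float
    hi: float

    def holds(self, value: float) -> bool:
        return self.lo <= value <= self.hi

def infer_range_invariant(var: str, observations: list[float]) -> Invariant:
    """Generalize from observed values: the variable likely stays within
    the min/max seen across the executions so far."""
    return Invariant(var, min(observations), max(observations))

def refine(inv: Invariant, counterexample: float) -> Invariant:
    """Weaken the invariant when a new execution falls outside it."""
    return Invariant(inv.var,
                     min(inv.lo, counterexample),
                     max(inv.hi, counterexample))

# Initial seeds never drove `len` above 64, so we guess 0 <= len <= 64.
inv = infer_range_invariant("len", [3.0, 17.0, 64.0, 0.0])
assert not inv.holds(128.0)   # a fuzzed input refutes the guess...
inv = refine(inv, 128.0)      # ...so the invariant is weakened
assert inv.holds(128.0)
```

In a directed-fuzzing setting, such invariants would prioritize inputs whose values stay consistent with the conditions observed on target-reaching paths, rather than mutating blindly; the counterexample loop keeps the guessed invariants honest as new executions are seen.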