Indicator of Benignity: An Industry View of False Positives in Malicious Domain Detection and its Mitigation
Daiping Liu
Network and Distributed System Security (NDSS) Symposium 2026 · Day 1 · Distributed Computation
For decades, cybersecurity has focused almost exclusively on hunting for bad indicators (IOCs). This talk flips that paradigm with a deceptively powerful concept: **Indicators of Benignity (IOBs)** -- proactive identification of evidence that a flagged domain is actually legitimate. Using a **six-year dataset from Palo Alto Networks** covering over **65,000 organizations** and **7 billion DNS queries per day**, the researchers reveal that of approximately **123,000 user-reported potential false positives**, a staggering **98% were confirmed as true false positives**. Traditional popularity-based allow lists like Tranco could only catch **38%** of these, but the researchers' **IOB Hunter** system -- combining LLM-powered web content analysis with a **transitive trust model** -- is over **700 times more effective** than traditional allow lists and was deployed in production where it confirmed **4,300 false positives** in two months.
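The summary does not spell out how the transitive trust model works. A minimal sketch of one plausible formalization, assuming trust propagates with decay from a seed set of verified-benign domains across observed infrastructure relationships (all function names, parameters, and domain names below are illustrative, not from the talk):

```python
from collections import defaultdict, deque

def propagate_trust(edges, seeds, decay=0.5, threshold=0.2):
    """Spread trust from seed domains across a domain-relation graph.

    edges:     pairs of related domains (e.g. shared hosting, CNAME links)
    seeds:     domains already verified as benign (trust = 1.0)
    decay:     multiplicative loss per hop of transitive trust
    threshold: minimum score below which trust stops propagating
    """
    graph = defaultdict(set)
    for src, dst in edges:
        # Treat relations as bidirectional for this sketch.
        graph[src].add(dst)
        graph[dst].add(src)

    trust = {d: 1.0 for d in seeds}
    queue = deque(seeds)
    while queue:
        domain = queue.popleft()
        for neighbor in graph[domain]:
            score = trust[domain] * decay
            # Only update if this path yields a higher score above the floor.
            if score >= threshold and score > trust.get(neighbor, 0.0):
                trust[neighbor] = score
                queue.append(neighbor)
    return trust

# Example: trust flows two hops from the seed, then falls below threshold.
edges = [("a.com", "b.com"), ("b.com", "c.com"), ("c.com", "d.com")]
scores = propagate_trust(edges, seeds=["a.com"])
```

Here `b.com` receives 0.5 and `c.com` receives 0.25, while `d.com` (0.125) falls below the threshold and is never marked trusted; a flagged domain with a sufficiently high score would be surfaced as a likely false positive.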
AI review
A production-grounded approach to false positive mitigation that introduces the IOB (Indicator of Benignity) concept and deploys it at Palo Alto Networks' scale. The transitive trust model is a clean formalization, the 99% precision is impressive, and the 700x improvement over Tranco is a compelling statistic. However, this is fundamentally a blue team tool for FP reduction, not an offensive technique or novel vulnerability discovery.