CaFA: Cost-aware, Feasible Attacks With Database Constraints Against Neural Tabular Classifiers

Matan Ben-Tov, Daniel Deutch, Nave Frost, Mahmood Sharif

IEEE Symposium on Security and Privacy 2024 · Day 1 · Continental Ballroom 5

This article delves into CaFA, a framework for generating **cost-aware, feasible adversarial attacks** against neural tabular classifiers. Presented at IEEE S&P 2024 by Matan Ben-Tov, Daniel Deutch, Nave Frost, and Mahmood Sharif, the work addresses a critical gap in machine learning security: the difficulty of crafting realistic, implementable evasion attacks in the tabular domain. Unlike image data, where subtle pixel changes can fool models, tabular features often have complex interdependencies and discrete values, so naive perturbations produce nonsensical records that cannot be realized in the real world.
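To make the feasibility problem concrete, here is a minimal, hypothetical sketch of the kind of database-style integrity check a tabular record must pass. The column names and constraints below are illustrative assumptions, not the constraints or code used by CaFA itself.

```python
# Hypothetical sketch: checking whether a perturbed tabular record still
# satisfies simple database-style integrity constraints. Column names and
# constraints are illustrative, not taken from the CaFA paper.

def satisfies_constraints(record: dict) -> bool:
    """Return True only if the record respects all illustrative constraints."""
    # Range constraint: age must be a plausible whole number.
    if not (0 <= record["age"] <= 120 and float(record["age"]).is_integer()):
        return False
    # Categorical constraint: the feature must stay in its discrete domain.
    if record["education"] not in {"primary", "secondary", "tertiary"}:
        return False
    # Inter-feature dependency: years of experience cannot exceed age.
    if record["years_experience"] > record["age"]:
        return False
    return True

# A naive adversarial perturbation that ignores these constraints can push the
# record outside the feasible region, making the attack non-realizable.
original = {"age": 30, "education": "secondary", "years_experience": 8}
perturbed = {"age": 30.7, "education": "secondary", "years_experience": 31.2}

print(satisfies_constraints(original))   # True
print(satisfies_constraints(perturbed))  # False: fractional age, experience > age
```

In this illustrative setting, an attack that only minimizes an L_p perturbation would happily output the second record, even though no real person could have it; constraint-aware attacks like the one the paper describes restrict the search to records that pass such checks.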

AI review

CaFA delivers a critical and genuinely novel framework for generating feasible, cost-aware adversarial attacks on tabular neural classifiers. It addresses a fundamental flaw in prior tabular attack methodologies by automatically integrating database integrity constraints and demonstrating real-world applicability against phishing models. This is precisely the kind of deep, impactful research we need.
