Adversarial Learning via Probabilistic Proximity Analysis
Published in IEEE International Conference on Acoustics, Speech and Signal Processing, 2021
Recommended citation: J. Hollis, J. Kim and R. Raich, "Adversarial Learning via Probabilistic Proximity Analysis," ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, ON, Canada, 2021, pp. 3830-3834, doi: 10.1109/ICASSP39728.2021.9414096. https://ieeexplore.ieee.org/abstract/document/9414096/
We consider the problem of designing a robust classifier in the presence of an adversary who aims to degrade classification performance by carefully falsifying the test instance. We propose a model-agnostic defense approach in which the true class label of the falsified instance is inferred from its proximity to each class, measured under the class-conditional data distributions. We present a k-nearest-neighbors-style approach that performs a sample-based approximation of this probabilistic proximity analysis. The proposed approach is evaluated on three real-world datasets in a game-theoretic setting, in which the adversary is assumed to optimize the attack design against the employed defense. In this evaluation, the proposed defense significantly outperforms benchmarks across various attack scenarios, demonstrating its efficacy against optimally designed attacks.
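The paper's exact scoring rule is not reproduced here, so the following is a minimal sketch of the general idea, assuming a plain Euclidean k-NN proxy: the test instance's proximity to each class is approximated by its mean distance to the k nearest training samples of that class, and the closest class is predicted. The function names `knn_class_proximity` and `classify` are illustrative, not from the paper.

```python
import numpy as np

def knn_class_proximity(X_train, y_train, x_test, k=5):
    """Score each class by the test point's mean distance to its k nearest
    class-conditional training samples (smaller score = closer class).
    This is an illustrative sample-based proxy, not the paper's exact rule."""
    scores = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]                        # samples of class c
        d = np.linalg.norm(Xc - x_test, axis=1)           # Euclidean distances to x_test
        kc = min(k, len(d))                               # guard against small classes
        scores[c] = np.partition(d, kc - 1)[:kc].mean()   # mean of the kc smallest distances
    return scores

def classify(X_train, y_train, x_test, k=5):
    """Predict the class whose samples lie closest to the (possibly falsified) instance."""
    scores = knn_class_proximity(X_train, y_train, x_test, k)
    return min(scores, key=scores.get)

# Toy usage: two well-separated Gaussian classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(classify(X, y, np.array([4.5, 5.2]), k=5))  # expected: 1
```

Because the decision depends only on distances to training samples rather than on a trained model's decision surface, a sketch like this illustrates why the approach is model-agnostic.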