On the Adversarial Robustness of Robust Estimators
Publication:5124489
DOI: 10.1109/TIT.2020.2985966 · zbMATH Open: 1446.62103 · arXiv: 1806.03801 · OpenAlex: W3015906298 · MaRDI QID: Q5124489 · FDO: Q5124489
Authors: Lifeng Lai, Erhan Bayraktar
Publication date: 29 September 2020
Published in: IEEE Transactions on Information Theory
Abstract: Motivated by recent data analytics applications, we study the adversarial robustness of robust estimators. Instead of assuming that only a fraction of the data points are outliers, as in the classic robust estimation setup, we consider an adversarial setup in which an attacker can observe the whole dataset and can modify all data samples so as to maximize the estimation error caused by the attack. We characterize the attacker's optimal attack strategy, and introduce the adversarial influence function (AIF) to quantify an estimator's sensitivity to such adversarial attacks. We provide an approach to characterize the AIF for any given robust estimator, and then design an optimal estimator that minimizes the AIF, which implies that it is least sensitive to, and hence most robust against, adversarial attacks. From this characterization, we identify a tradeoff between the AIF (i.e., robustness against adversarial attack) and the influence function, a quantity used in classic robust estimation to measure robustness against outliers, and design estimators that strike a desirable tradeoff between these two quantities.
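A minimal sketch of the idea behind the abstract, not the paper's actual AIF definition: when an attacker may perturb *every* sample within a per-sample budget eps (rather than grossly corrupting a small fraction), classically robust estimators such as the median enjoy no advantage over the mean, since both are shift-equivariant. The function name `adversarial_sensitivity` and the uniform-shift attack are illustrative assumptions.

```python
import numpy as np

def adversarial_sensitivity(estimator, data, eps):
    # Finite-sample analogue of an adversarial influence function:
    # worst-case movement of the estimate, per unit of attack budget,
    # under a coordinated shift of all samples by +eps or -eps
    # (for monotone location estimators a uniform shift is the natural attack).
    base = estimator(data)
    up = estimator(data + eps)
    down = estimator(data - eps)
    return max(abs(up - base), abs(down - base)) / eps

rng = np.random.default_rng(0)
data = rng.normal(size=1000)
# Both estimators move one-for-one with a uniform shift (sensitivity 1.0):
print(adversarial_sensitivity(np.mean, data, 0.1))
print(adversarial_sensitivity(np.median, data, 0.1))
```

This contrasts with the classic influence function, under which the median is far more robust than the mean to a single outlier, illustrating the tradeoff the abstract describes.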
Full work available at URL: https://arxiv.org/abs/1806.03801
MSC classifications: Robustness and adaptive procedures (parametric inference) (62F35); Nonconvex programming, global optimization (90C26)
Cited In (5)
- Dynamic Cheap Talk for Robust Adversarial Learning
- Precise statistical analysis of classification accuracies for adversarial training
- Title not available
- Robust Classification Under $\ell_0$ Attack for the Gaussian Mixture Model
- The curse of overparametrization in adversarial training: precise analysis of robust generalization for random features regression