On the robustness of randomized classifiers to adversarial examples
From MaRDI portal
Publication:2102396
Cites work
- scientific article; zbMATH DE number 3173999 (no title available)
- scientific article; zbMATH DE number 107482 (no title available)
- scientific article; zbMATH DE number 1909499 (no title available)
- 10.1162/153244303321897690
- Asymptotic Statistics
- Computational optimal transport. With applications to data sciences
- Foundations of machine learning
- GGHLite: more efficient multilinear maps from ideal lattices
- Learning in the Presence of Malicious Errors
- Noise-Enhanced Performance for an Optimal Bayesian Estimator
- Note on discrimination information and variation (Corresp.)
- On Choosing and Bounding Probability Metrics
- On Pinsker's and Vajda's Type Inequalities for Csiszár's $f$-Divergences
- Robust optimization
- Robustness and generalization
- Rényi Divergence and Kullback-Leibler Divergence
- Stochastic resonance in discrete time nonlinear AR(1) models
- The Bayesian Choice
- Toward efficient agnostic learning
- Understanding machine learning. From theory to algorithms
Cited in (3)