Protecting classifiers from attacks
Publication: Q6579153
DOI: 10.1214/24-STS922
MaRDI QID: Q6579153
FDO: Q6579153
Authors: Roi Naveiro, Alberto Redondo, David Ríos Insua, Fabrizio Ruggeri
Publication date: 25 July 2024
Published in: Statistical Science
Cites Work
- Title not available
- Random forests
- Pattern recognition and machine learning
- Title not available
- Constructing Summary Statistics for Approximate Bayesian Computation: Semi-Automatic Approximate Bayesian Computation
- Inference for Stereological Extremes
- A Stochastic Approximation Method
- Deep learning
- Robust Bayesian inference via coarsening
- Robust Bayesian analysis
- On players' models of other players: Theory and experimental evidence
- Adversarial risk analysis
- The Elements of Statistical Learning
- Large-scale machine learning with stochastic gradient descent
- Adversarial machine learning
- Estimating classification error rate: repeated cross-validation, repeated hold-out and bootstrap
- Adversarial classification: an adversarial risk analysis approach
- Generalized accept-reject sampling schemes
- Expert judgement in risk and decision analysis
- Augmented probability simulation methods for sequential games
- Adversarial risk analysis: the Somali pirates case
- Approximating Bayes in the 21st century
- Some statistical challenges in automated driving systems
Cited In (1)