Protecting classifiers from attacks
From MaRDI portal
Publication:6579153
Cites work
- scientific article; zbMATH DE number 3954047 (no title available)
- scientific article; zbMATH DE number 47310 (no title available)
- A Stochastic Approximation Method
- Adversarial classification: an adversarial risk analysis approach
- Adversarial machine learning
- Adversarial risk analysis
- Adversarial risk analysis: the Somali pirates case
- Approximating Bayes in the 21st century
- Augmented probability simulation methods for sequential games
- Constructing summary statistics for approximate Bayesian computation: semi-automatic approximate Bayesian computation. With discussion and authors' reply
- Deep learning
- Estimating classification error rate: repeated cross-validation, repeated hold-out and bootstrap
- Expert judgement in risk and decision analysis
- Generalized accept-reject sampling schemes
- Inference for Stereological Extremes
- Large-scale machine learning with stochastic gradient descent
- On players' models of other players: Theory and experimental evidence
- Pattern recognition and machine learning
- Random forests
- Robust Bayesian analysis
- Robust Bayesian inference via coarsening
- Some statistical challenges in automated driving systems
- The Elements of Statistical Learning