Adversarial classification: an adversarial risk analysis approach
From MaRDI portal
Publication:2302772
Abstract: Classification problems in security settings are usually viewed as confrontations in which one or more adversaries try to fool a classifier to obtain a benefit. Most approaches to such adversarial classification problems have relied on game-theoretic ideas with strong underlying common knowledge assumptions, which are unrealistic in security domains. We provide an alternative framework for such problems based on adversarial risk analysis, which we illustrate with several examples. Computational and implementation issues are also discussed.
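The adversarial risk analysis (ARA) approach summarized in the abstract replaces common knowledge assumptions with the defender's subjective uncertainty about the attacker's problem: the defender places priors on the attacker's gains and costs, simulates the attacker's optimal behavior, and folds the resulting attack probability into its own Bayesian classification. The following is a minimal illustrative sketch of that general idea, not the paper's actual model; the binary feature, the uniform priors on the attacker's gain and cost, and all numerical values are assumed for illustration.

```python
import random

random.seed(0)

# Assumed illustrative quantities (not from the paper).
P_MALICIOUS = 0.2                  # prior probability an instance is malicious
P_X1_GIVEN_BENIGN = 0.1            # P(x = 1 | benign)
P_X1_GIVEN_MALICIOUS_CLEAN = 0.9   # P(x = 1 | malicious, no manipulation)

def sample_attack_prob(n_samples=10_000):
    """ARA Monte Carlo step: the defender is uncertain about the attacker's
    gain from evading and cost of manipulating, so it samples both from
    assumed priors and records how often manipulation (flipping x from 1
    to 0) would be the attacker's optimal choice."""
    attacks = 0
    for _ in range(n_samples):
        gain = random.uniform(0.0, 1.0)   # assumed prior on evasion gain
        cost = random.uniform(0.0, 0.5)   # assumed prior on manipulation cost
        if gain - cost > 0:               # attacker manipulates iff net gain
            attacks += 1
    return attacks / n_samples

def posterior_malicious(x, p_attack):
    """Defender's posterior probability that the instance is malicious,
    folding the estimated manipulation probability into the likelihood."""
    if x == 1:   # a malicious sender showing x = 1 chose not to manipulate
        like_mal = P_X1_GIVEN_MALICIOUS_CLEAN * (1 - p_attack)
        like_ben = P_X1_GIVEN_BENIGN
    else:        # x = 0: either genuinely 0, or a manipulated 1
        like_mal = ((1 - P_X1_GIVEN_MALICIOUS_CLEAN)
                    + P_X1_GIVEN_MALICIOUS_CLEAN * p_attack)
        like_ben = 1 - P_X1_GIVEN_BENIGN
    num = like_mal * P_MALICIOUS
    return num / (num + like_ben * (1 - P_MALICIOUS))

p_attack = sample_attack_prob()
print(f"estimated attack probability: {p_attack:.2f}")
print(f"P(malicious | x=0) = {posterior_malicious(0, p_attack):.2f}")
```

Under these assumed priors the attack probability is about 0.75, and accounting for it raises the posterior probability of maliciousness for instances observed with x = 0, since a "clean-looking" instance may be a manipulated malicious one.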
Recommendations
- Adversarial classification using signaling games with an application to phishing detection
- Machine learning in adversarial environments
- Scalable Optimal Classifiers for Adversarial Settings Under Uncertainty
- Classifier evaluation and attribute selection against active adversaries
- Adversarial risk analysis
Cites work
- scientific article (zbMATH DE number 6114075; no title available)
- scientific article (zbMATH DE number 6775935; no title available)
- Adversarial risk analysis
- Bayesian Classification of Tumours by Using Gene Expression Data
- Classifier evaluation and attribute selection against active adversaries
- Computer age statistical inference. Algorithms, evidence, and data science
- Decision analysis by augmented probability simulation
- Estimating classification error rate: repeated cross-validation, repeated hold-out and bootstrap
- Information enhancement -- a tool for approximate representation of optimal strategies from influence diagrams
- Multi-agent influence diagrams for representing and solving games.
- Network games: theory, models, and dynamics
- Pattern recognition and machine learning.
- Regression Metamodels for Simulation with Common Random Numbers: Comparison of Validation Tests and Confidence Intervals
- Safe and Effective Importance Sampling
Cited in (27)
- Treant: training evasion-aware decision trees
- Adversarial risk analysis: an overview
- Adversarial Risk Analysis for Auctions Using Mirror Equilibrium and Bayes Nash Equilibrium
- Classifier evaluation and attribute selection against active adversaries
- Adversarial vulnerability bounds for Gaussian process classification
- Gradient methods for solving Stackelberg games
- Adversarial classification using signaling games with an application to phishing detection
- When Should You Defend Your Classifier?
- Mining adversarial patterns via regularized loss minimization
- Editorial. Special issue on robustness in probabilistic graphical models
- Protecting classifiers from attacks
- Health care fraud classifiers in practice
- A statistician teaches deep learning
- Decision support issues in automated driving systems
- An Adversarial Risk Analysis Framework for Batch Acceptance Problems
- Analysis of classifiers' robustness to adversarial perturbations
- Theoretical foundations of adversarial binary detection
- Intentional control of type I error over unconscious data distortion: a Neyman-Pearson approach to text classification
- Data poisoning against information-theoretic feature selection
- Adversarial risk analysis for first‐price sealed‐bid auctions
- Adversarial Machine Learning: Bayesian Perspectives
- Query strategies for evading convex-inducing classifiers
- The security of machine learning
- Machine learning in adversarial environments
- On global robustness of an adversarial risk analysis solution
- Scalable Optimal Classifiers for Adversarial Settings Under Uncertainty
- Manipulating hidden-Markov-model inferences by corrupting batch data