Pages that link to "Item:Q1640392"
From MaRDI portal
The following pages link to Analysis of classifiers' robustness to adversarial perturbations (Q1640392):
Displaying 15 items.
- Adversarial noise attacks of deep learning architectures: stability analysis via sparse-modeled signals (Q1988345)
- Re-thinking model robustness from stability: a new insight to defend adversarial examples (Q2102317)
- Black-box adversarial attacks by manipulating image attributes (Q2123528)
- Achieving adversarial robustness via sparsity (Q2127259)
- On the regularized risk of distributionally robust learning over deep neural networks (Q2168882)
- A robust outlier control framework for classification designed with family of homotopy loss function (Q2188214)
- Spanning attack: reinforce black-box attacks with unlabeled data (Q2217425)
- Compositional falsification of cyber-physical systems with machine learning components (Q2331078)
- Adversarial classification via distributional robustness with Wasserstein ambiguity (Q2693647)
- (Q5053226)
- (Q5053308)
- Achieving Adversarial Robustness Requires An Active Teacher (Q5079538)
- Deep Convolutional Neural Networks for Image Classification: A Comprehensive Review (Q5382481)
- Metrics and methods for robustness evaluation of neural networks with generative models (Q6053812)
- Adversarial Robustness of Sparse Local Lipschitz Predictors (Q6070302)