Enhancing adversarial attack transferability with multi-scale feature attack
From MaRDI portal
Publication:4990044
Recommendations
- A robust generative classifier against transfer attacks based on variational auto-encoders
- Black-box adversarial attacks by manipulating image attributes
- Spanning attack: reinforce black-box attacks with unlabeled data
- Robustifying models against adversarial attacks by Langevin dynamics
- Implicit adversarial data augmentation and robustness with noise-based learning
Cited in (13)
- Generating universal adversarial perturbation with ResNet
- A robust generative classifier against transfer attacks based on variational auto-encoders
- Black-box adversarial attacks by manipulating image attributes
- Greedy attack and Gumbel attack: generating adversarial examples for discrete data
- Vulnerability of classifiers to evolutionary generated adversarial examples
- Robustifying models against adversarial attacks by Langevin dynamics
- Generalizing universal adversarial perturbations for deep neural networks
- Implicit adversarial data augmentation and robustness with noise-based learning
- An adversarial attack based on multi-objective optimization in the black-box scenario: MOEA-APGA II
- Lagrangian objective function leads to improved unforeseen attack generalization
- Spanning attack: reinforce black-box attacks with unlabeled data
- Towards improving fast adversarial training in multi-exit network
- An empirical study of derivative-free-optimization algorithms for targeted black-box attacks in deep neural networks