DeepFool
From MaRDI portal
swMATH: 20937 · MaRDI QID: Q32750 · FDO: Q32750
Author name not available
Official website: https://arxiv.org/abs/1511.04599v3
Source code repository: https://github.com/lts4/deepfool
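For context, the linked paper describes DeepFool as an iterative method that finds an approximately minimal perturbation moving an input across a classifier's decision boundary. For a linear binary classifier the perturbation has a closed form, which the sketch below illustrates; the function and parameter names here are illustrative and are not the API of the linked repository.

```python
import numpy as np

def deepfool_linear(x, w, b, overshoot=0.02):
    """Closed-form minimal perturbation for a linear binary
    classifier f(x) = w.x + b (illustrative sketch of the
    DeepFool idea, not the reference implementation)."""
    f = w @ x + b
    # Orthogonal projection of x onto the decision boundary f = 0
    r = -f * w / np.dot(w, w)
    # Small overshoot so the perturbed point lands past the boundary
    return x + (1 + overshoot) * r

# Example: a 2-D linear classifier
w = np.array([1.0, -2.0])
b = 0.5
x = np.array([3.0, 1.0])            # f(x) = 1.5, class +1
x_adv = deepfool_linear(x, w, b)
print(np.sign(w @ x + b), np.sign(w @ x_adv + b))  # opposite signs
```

For general (non-linear, multi-class) networks, the paper iterates this step using a local linearization of the classifier at the current point until the predicted label changes.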
Cited In (66)
- Treant: training evasion-aware decision trees
- Verification of piecewise deep neural networks: a star set approach with zonotope pre-filter
- DIMBA
- 3DVerifier
- PRoA
- Paracosm
- AdversarialWaveletTraining
- GUAP
- Enhancing robustness verification for deep neural networks via symbolic propagation
- DeepGauge
- DiffRNN: differential verification of recurrent neural networks
- SyReNN: a tool for analyzing deep neural networks
- DeepMutation
- DeepXplore
- TensorFuzz
- A robust generative classifier against transfer attacks based on variational auto-encoders
- Black-box adversarial attacks by manipulating image attributes
- Achieving adversarial robustness via sparsity
- Stronger data poisoning attacks break data sanitization defenses
- A survey of safety and trustworthiness of deep neural networks: verification, testing, adversarial attack and defence, and interpretability
- Deep learning as optimal control problems: models and numerical methods
- Analysis of classifiers' robustness to adversarial perturbations
- Scale-invariant scale-channel networks: deep networks that generalise to previously unseen scales
- Apollo
- cleverhans
- Foolbox
- Lasagne
- ORL
- Marabou
- Reluplex
- advertorch
- GXNOR-Net
- NATTACK
- self-driving-car-sim
- Generating universal adversarial perturbation with ResNet
- MimicGAN: robust projection onto image manifolds with corruption mimicking
- ART
- gemmlowp
- GoogLeNet
- DeepID3
- PRODeep
- AI2
- DL2
- FastGRNN
- gRPC
- Shiftry
- SyReNN
- The gap between theory and practice in function approximation with deep neural networks
- Adv-BNN
- ANODE
- RecurJac
- SAR2SAR
- meminf-defense
- BadNets
- MagNet
- Detecting scene-plausible perceptible backdoors in trained DNNs without access to the training set
- Active subspace of neural networks: structural analysis and universal attacks
- Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks
- NNRepair
- Veritex
- Adversarial classification via distributional robustness with Wasserstein ambiguity
- Quantized convolutional neural networks through the lens of partial differential equations
- SENSE
- SemanticAdv
- Achieving adversarial robustness requires an active teacher
- Compositional falsification of cyber-physical systems with machine learning components
This page was built for software: DeepFool