Safety verification of deep neural networks
Publication: 2151229
DOI: 10.1007/978-3-319-63387-9_1
zbMath: 1494.68166
arXiv: 1610.06940
OpenAlex: W2543296129
MaRDI QID: Q2151229
Publication date: 1 July 2022
Full work available at URL: https://arxiv.org/abs/1610.06940
Keywords: deep neural networks; adversarial examples; adversarial perturbations; fast gradient sign method (FGSM); German traffic sign recognition benchmark (GTSRB)
Mathematics Subject Classification: Artificial neural networks and deep learning (68T07); Specification and verification (program logics, model checking, etc.) (68Q60); Machine vision and scene understanding (68T45)
Related Items (34)
Deep Statistical Model Checking
DiffRNN: differential verification of recurrent neural networks
BDD4BNN: a BDD-based quantitative analysis framework for binarized neural networks
Verisig 2.0: verification of neural network controllers using Taylor model preconditioning
Robustness verification of semantic segmentation neural networks using relaxed reachability
Static analysis of ReLU neural networks with tropical polyhedra
Exploiting verified neural networks via floating point numerical error
Toward neural-network-guided program synthesis and verification
Learning finite state models from recurrent neural networks
Learning for Constrained Optimization: Identifying Optimal Active Constraint Sets
Global optimization of objective functions represented by ReLU networks
Metrics and methods for robustness evaluation of neural networks with generative models
Sparse polynomial optimisation for neural network verification
Risk-aware shielding of partially observable Monte Carlo planning policies
Adversarial vulnerability bounds for Gaussian process classification
CLEVEREST: accelerating CEGAR-based neural network verification via adversarial attacks
Neural Network Verification Using Residual Reasoning
Reachability analysis of deep ReLU neural networks using facet-vertex incidence
Towards a unifying logical framework for neural networks
Verifying feedforward neural networks for classification in Isabelle/HOL
Linear temporal public announcement logic: a new perspective for reasoning about the knowledge of multi-classifiers
Automatic Abstraction Refinement in Neural Network Verification using Sensitivity Analysis
Safety Verification for Deep Neural Networks with Provable Guarantees (Invited Paper)
An SMT-based approach for verifying binarized neural networks
SyReNN: a tool for analyzing deep neural networks
Verification of piecewise deep neural networks: a star set approach with zonotope pre-filter
Probabilistic guarantees for safe deep reinforcement learning
How Many Bits Does it Take to Quantize Your Neural Network?
A survey of safety and trustworthiness of deep neural networks: verification, testing, adversarial attack and defence, and interpretability
A game-based approximate verification of deep neural networks with provable guarantees
Improving neural network verification through spurious region guided refinement
Enhancing robustness verification for deep neural networks via symbolic propagation
Compositional falsification of cyber-physical systems with machine learning components
Task-Aware Verifiable RNN-Based Policies for Partially Observable Markov Decision Processes