Safety verification of deep neural networks

From MaRDI portal
Publication:2151229

DOI: 10.1007/978-3-319-63387-9_1
zbMath: 1494.68166
arXiv: 1610.06940
OpenAlex: W2543296129
MaRDI QID: Q2151229

Yanyan Li

Publication date: 1 July 2022

Full work available at URL: https://arxiv.org/abs/1610.06940

Related Items (34)

Deep Statistical Model Checking
DiffRNN: differential verification of recurrent neural networks
\textsf{BDD4BNN}: a BDD-based quantitative analysis framework for binarized neural networks
Verisig 2.0: verification of neural network controllers using Taylor model preconditioning
Robustness verification of semantic segmentation neural networks using relaxed reachability
Static analysis of ReLU neural networks with tropical polyhedra
Exploiting verified neural networks via floating point numerical error
Toward neural-network-guided program synthesis and verification
Learning finite state models from recurrent neural networks
Learning for Constrained Optimization: Identifying Optimal Active Constraint Sets
Global optimization of objective functions represented by ReLU networks
Metrics and methods for robustness evaluation of neural networks with generative models
Sparse polynomial optimisation for neural network verification
Risk-aware shielding of partially observable Monte Carlo planning policies
Adversarial vulnerability bounds for Gaussian process classification
\textsf{CLEVEREST}: accelerating CEGAR-based neural network verification via adversarial attacks
Neural Network Verification Using Residual Reasoning
Reachability analysis of deep ReLU neural networks using facet-vertex incidence
Towards a unifying logical framework for neural networks
Verifying feedforward neural networks for classification in Isabelle/HOL
Linear temporal public announcement logic: a new perspective for reasoning about the knowledge of multi-classifiers
Automatic Abstraction Refinement in Neural Network Verification using Sensitivity Analysis
Safety Verification for Deep Neural Networks with Provable Guarantees (Invited Paper)
An SMT-based approach for verifying binarized neural networks
SyReNN: a tool for analyzing deep neural networks
Verification of piecewise deep neural networks: a star set approach with zonotope pre-filter
Probabilistic guarantees for safe deep reinforcement learning
How Many Bits Does it Take to Quantize Your Neural Network?
A survey of safety and trustworthiness of deep neural networks: verification, testing, adversarial attack and defence, and interpretability
A game-based approximate verification of deep neural networks with provable guarantees
Improving neural network verification through spurious region guided refinement
Enhancing robustness verification for deep neural networks via symbolic propagation
Compositional falsification of cyber-physical systems with machine learning components
Task-Aware Verifiable RNN-Based Policies for Partially Observable Markov Decision Processes
This page was built for publication: Safety verification of deep neural networks