An abstraction-based framework for neural network verification
From MaRDI portal
Publication:2225075
Abstract: Deep neural networks are increasingly being used as controllers for safety-critical systems. Because neural networks are opaque, certifying their correctness is a significant challenge. To address this issue, several neural network verification approaches have recently been proposed. However, these approaches afford limited scalability, and applying them to large networks can be challenging. In this paper, we propose a framework that can enhance neural network verification techniques by using over-approximation to reduce the size of the network, thus making it more amenable to verification. We perform the approximation such that if the property holds for the smaller (abstract) network, it holds for the original as well. The over-approximation may be too coarse, in which case the underlying verification tool might return a spurious counterexample. Under such conditions, we perform counterexample-guided refinement to adjust the approximation, and then repeat the process. Our approach is orthogonal to, and can be integrated with, many existing verification techniques. For evaluation purposes, we integrate it with the recently proposed Marabou framework, and observe a significant improvement in Marabou's performance. Our experiments demonstrate the great potential of our approach for verifying larger neural networks.
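The abstract's verify-then-refine loop can be illustrated on a toy problem. Below is a minimal sketch of a counterexample-guided abstraction-refinement (CEGAR) loop in the spirit the abstract describes, not the paper's actual neuron-merging construction: the "network" is just a list of non-negative outputs, the abstraction merges outputs into groups over-approximated by their sum, and a spurious abstract counterexample triggers a split of the offending group. All names and the abstraction scheme are illustrative assumptions.

```python
def verify_with_cegar(outputs, bound):
    """Check that every non-negative output is <= bound, using a coarse
    over-approximation first and refining only when needed.

    Illustrative CEGAR sketch, not the paper's algorithm: a group of
    outputs is abstracted by its sum, which over-approximates each
    member (since all outputs are non-negative).
    """
    assert all(v >= 0 for v in outputs), "toy abstraction assumes non-negative outputs"
    # Start from the coarsest abstraction: one group holding every output.
    groups = [list(range(len(outputs)))]
    while True:
        # Find a group whose abstract value (the sum) violates the bound.
        bad = next((g for g in groups
                    if sum(outputs[i] for i in g) > bound), None)
        if bad is None:
            # The property holds on the abstract network; by soundness of
            # over-approximation it holds on the concrete one too.
            return "SAFE"
        # Extract a concrete candidate counterexample: the group's worst member.
        witness = max(bad, key=lambda i: outputs[i])
        if outputs[witness] > bound:
            # The counterexample is genuine in the concrete network.
            return ("UNSAFE", witness)
        # Spurious counterexample: the abstraction was too coarse.
        # Refine by splitting the offending group and repeat.
        mid = len(bad) // 2
        groups.remove(bad)
        groups.extend([bad[:mid], bad[mid:]])
```

For example, `verify_with_cegar([1, 2, 3, 2], 5)` first sees the abstract sum 8 > 5, finds the worst concrete output (3) satisfies the bound, splits the group, and then proves safety on the refined abstraction. The loop terminates because singleton groups have abstract value equal to their concrete value, so refinement can never produce a spurious result indefinitely.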
Cited in (32)
- Reluplex: a calculus for reasoning about deep neural networks
- Abstract neural networks
- Property-directed verification and robustness certification of recurrent neural networks
- Verifying the generalization of deep learning to out-of-distribution domains
- BDD4BNN: a BDD-based quantitative analysis framework for binarized neural networks
- Robustness verification of quantum classifiers
- Verisig 2.0: verification of neural network controllers using Taylor model preconditioning
- T4V: exploring neural network architectures that improve the scalability of neural network verification
- Improving neural network verification through spurious region guided refinement
- Example Guided Synthesis of Linear Approximations for Neural Network Verification
- Towards global neural network abstractions with locally-exact reconstruction
- Generating probabilistic safety guarantees for neural network controllers
- A game-based approximate verification of deep neural networks with provable guarantees
- Boosting robustness verification of semantic feature neighborhoods
- Bisimulations for neural network reduction
- On neural network equivalence checking using SMT solvers
- Neural Network Verification Using Residual Reasoning
- Enhancing robustness verification for deep neural networks via symbolic propagation
- QEBVerif: quantization error bound verification of neural networks
- CLEVEREST: accelerating CEGAR-based neural network verification via adversarial attacks
- Probabilistic Verification of Neural Networks Against Group Fairness
- SMT-based modeling and verification of spiking neural networks: a case study
- Towards formal XAI: formally approximate minimal explanations of neural networks
- Verification of deep convolutional neural networks using ImageStars
- Challenging SMT solvers to verify neural networks
- Metrics and methods for robustness evaluation of neural networks with generative models
- Exploiting verified neural networks via floating point numerical error
- OccRob: efficient SMT-based occlusion robustness verification of deep neural networks
- An SMT-based approach for verifying binarized neural networks
- Chordal sparsity for SDP-based neural network verification
- Reachability analysis of deep ReLU neural networks using facet-vertex incidence
- Guaranteed approximation error estimation of neural networks and model modification