An abstraction-based framework for neural network verification
From MaRDI portal
Publication: Q2225075
DOI: 10.1007/978-3-030-53288-8_3
zbMATH Open: 1478.68154
arXiv: 1910.14574
OpenAlex: W3102139284
MaRDI QID: Q2225075
FDO: Q2225075
Authors: Yizhak Yisrael Elboher, Justin Gottschlich, Guy Katz
Publication date: 4 February 2021
Abstract: Deep neural networks are increasingly being used as controllers for safety-critical systems. Because neural networks are opaque, certifying their correctness is a significant challenge. To address this issue, several neural network verification approaches have recently been proposed. However, these approaches afford limited scalability, and applying them to large networks can be challenging. In this paper, we propose a framework that can enhance neural network verification techniques by using over-approximation to reduce the size of the network, thus making it more amenable to verification. We perform the approximation such that if the property holds for the smaller (abstract) network, it holds for the original as well. The over-approximation may be too coarse, in which case the underlying verification tool might return a spurious counterexample. Under such conditions, we perform counterexample-guided refinement to adjust the approximation, and then repeat the process. Our approach is orthogonal to, and can be integrated with, many existing verification techniques. For evaluation purposes, we integrate it with the recently proposed Marabou framework, and observe a significant improvement in Marabou's performance. Our experiments demonstrate the great potential of our approach for verifying larger neural networks.
Full work available at URL: https://arxiv.org/abs/1910.14574
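Approach sketch

The abstract describes a counterexample-guided abstraction refinement (CEGAR) loop: verify a small over-approximating network, and refine it whenever the backend returns a counterexample that does not replay on the original network. The following is a minimal Python sketch of that loop under the soundness assumption stated in the abstract; every name in it (verify, abstract, refine, is_real) is a hypothetical placeholder, not the authors' actual interface or Marabou's API. Passing the backend verifier in as a parameter mirrors the paper's claim that the framework is orthogonal to the underlying verification technique.

# Minimal CEGAR sketch; all helper names are hypothetical placeholders.
def verify_with_abstraction(network, prop, verify, abstract, refine, is_real):
    """Check `prop` on `network` via a coarse-to-fine abstraction loop.

    Precondition: `abstract` over-approximates, i.e. if `prop` holds on
    abstract(network), it also holds on `network`.
    """
    abstract_net = abstract(network)              # small over-approximation
    while True:
        status, cex = verify(abstract_net, prop)  # e.g., a backend such as Marabou
        if status == "holds":                     # no counterexample on the abstraction
            return ("holds", None)                # sound, by over-approximation
        if is_real(network, prop, cex):           # replay cex on the original network
            return ("violated", cex)
        # Spurious counterexample: the abstraction was too coarse.
        # Refine it (e.g., split previously merged neurons) and retry.
        abstract_net = refine(abstract_net, cex)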
Mathematics Subject Classification:
- Artificial neural networks and deep learning (68T07)
- Specification and verification (program logics, model checking, etc.) (68Q60)
Cited In (26)
- Enhancing robustness verification for deep neural networks via symbolic propagation
- BDD4BNN: a BDD-based quantitative analysis framework for binarized neural networks
- Robustness verification of quantum classifiers
- Verisig 2.0: verification of neural network controllers using Taylor model preconditioning
- Boosting robustness verification of semantic feature neighborhoods
- On neural network equivalence checking using SMT solvers
- OccRob: efficient SMT-based occlusion robustness verification of deep neural networks
- Probabilistic Verification of Neural Networks Against Group Fairness
- Reluplex: a calculus for reasoning about deep neural networks
- Improving neural network verification through spurious region guided refinement
- Towards formal XAI: formally approximate minimal explanations of neural networks
- Property-directed verification and robustness certification of recurrent neural networks
- An SMT-based approach for verifying binarized neural networks
- Verifying the generalization of deep learning to out-of-distribution domains
- Bisimulations for neural network reduction
- Metrics and methods for robustness evaluation of neural networks with generative models
- Example Guided Synthesis of Linear Approximations for Neural Network Verification
- Towards global neural network abstractions with locally-exact reconstruction
- Generating probabilistic safety guarantees for neural network controllers
- CLEVEREST: accelerating CEGAR-based neural network verification via adversarial attacks
- Challenging SMT solvers to verify neural networks
- Neural Network Verification Using Residual Reasoning
- SMT-based modeling and verification of spiking neural networks: a case study
- QEBVerif: quantization error bound verification of neural networks
- Reachability analysis of deep ReLU neural networks using facet-vertex incidence
- Guaranteed approximation error estimation of neural networks and model modification