An abstraction-based framework for neural network verification

Publication: 2225075

DOI: 10.1007/978-3-030-53288-8_3
zbMATH Open: 1478.68154
arXiv: 1910.14574
OpenAlex: W3102139284
MaRDI QID: Q2225075
FDO: Q2225075


Authors: Yizhak Yisrael Elboher, Justin Gottschlich, Guy Katz


Publication date: 4 February 2021

Abstract: Deep neural networks are increasingly being used as controllers for safety-critical systems. Because neural networks are opaque, certifying their correctness is a significant challenge. To address this issue, several neural network verification approaches have recently been proposed. However, these approaches afford limited scalability, and applying them to large networks can be challenging. In this paper, we propose a framework that can enhance neural network verification techniques by using over-approximation to reduce the size of the network, thus making it more amenable to verification. We perform the approximation such that if the property holds for the smaller (abstract) network, it holds for the original as well. The over-approximation may be too coarse, in which case the underlying verification tool might return a spurious counterexample. Under such conditions, we perform counterexample-guided refinement to adjust the approximation, and then repeat the process. Our approach is orthogonal to, and can be integrated with, many existing verification techniques. For evaluation purposes, we integrate it with the recently proposed Marabou framework, and observe a significant improvement in Marabou's performance. Our experiments demonstrate the great potential of our approach for verifying larger neural networks.
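
The abstract describes a counterexample-guided abstraction-refinement (CEGAR) loop. The Python sketch below illustrates that control flow only; the helper functions (abstract, verify, is_real_counterexample, refine) are hypothetical placeholders for the paper's abstraction step, the underlying verifier (e.g. Marabou), the replay of a counterexample on the original network, and the refinement step. It is not the paper's actual implementation.

def verify_with_abstraction(network, prop, abstract, verify,
                            is_real_counterexample, refine):
    # Over-approximate: build a smaller network constructed so that if
    # the property holds on it, it also holds on the original network.
    abstract_net = abstract(network)
    while True:
        counterexample = verify(abstract_net, prop)  # underlying verifier
        if counterexample is None:
            # Property holds on the abstraction; by soundness of the
            # over-approximation, it holds on the original as well.
            return "HOLDS", None
        if is_real_counterexample(network, prop, counterexample):
            # The counterexample also violates the property on the
            # original network, so it is a genuine violation.
            return "VIOLATED", counterexample
        # Spurious counterexample: the over-approximation was too coarse.
        # Refine (e.g. split previously merged neurons) and try again.
        abstract_net = refine(abstract_net, network, counterexample)

Since each refinement step brings the abstraction strictly closer to the original network, the loop terminates: in the worst case it falls back to verifying the original network itself.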


Full work available at URL: https://arxiv.org/abs/1910.14574








Cited in 26 documents




