A game-based approximate verification of deep neural networks with provable guarantees
Publication: 2286751
Abstract: Despite the improved accuracy of deep neural networks, the discovery of adversarial examples has raised serious safety concerns. In this paper, we study two variants of pointwise robustness: the maximum safe radius problem, which for a given input sample computes the minimum distance to an adversarial example, and the feature robustness problem, which aims to quantify the robustness of individual features to adversarial perturbations. We demonstrate that, under the assumption of Lipschitz continuity, both problems can be approximated using finite optimisation by discretising the input space, and the approximation has provable guarantees, i.e., the error is bounded. We then show that the resulting optimisation problems can be reduced to the solution of two-player turn-based games, where the first player selects features and the second perturbs the image within the selected feature. While the second player always aims to minimise the distance to an adversarial example, the first player can be cooperative or competitive, depending on the optimisation objective. We employ an anytime approach to solve the games, in the sense of approximating the value of a game by monotonically improving its upper and lower bounds. The Monte Carlo tree search algorithm is applied to compute upper bounds for both games, while the Admissible A* and Alpha-Beta Pruning algorithms are used to compute lower bounds for the maximum safe radius and feature robustness games, respectively. When computing the upper bound of the maximum safe radius, our tool demonstrates performance competitive with existing adversarial example crafting algorithms. Furthermore, we show how our framework can be deployed to evaluate the pointwise robustness of neural networks in safety-critical applications such as traffic sign recognition in self-driving cars.
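The soundness of the discretisation rests on a standard Lipschitz argument: if the network satisfies |f(x) - f(y)| <= L * ||x - y|| for some constant L, then restricting the search to a grid of step tau changes the objective value by an amount bounded in terms of L and tau, so the finite optimisation approximates the continuous one with a bounded error. The sketch below is a minimal, hypothetical illustration of a grid-based search for an upper bound on the maximum safe radius; the names `predict`, `tau`, and the axis-aligned move set are illustrative assumptions, not the authors' implementation, which explores a far richer perturbation space via the two-player game solved with Monte Carlo tree search.

```python
# Minimal, hypothetical sketch of the grid-discretisation idea: search
# perturbations of increasing L_inf radius on a grid of step tau; the
# first radius at which the predicted label flips is an upper bound on
# the maximum safe radius of the input.
import numpy as np

def msr_upper_bound(x, predict, tau, max_radius):
    """Return the smallest grid radius at which a label flip is found,
    or None if no adversarial example is found up to max_radius.

    x          -- input sample as a numpy array (values in [0, 1])
    predict    -- classifier returning a discrete label (assumed)
    tau        -- grid step; Lipschitz continuity bounds the error
                  incurred by searching only grid points
    max_radius -- largest L_inf radius to try
    """
    label = predict(x)
    flat = x.ravel()
    for k in range(1, int(np.ceil(max_radius / tau)) + 1):
        r = k * tau
        # Coarse move set: perturb one coordinate at a time by +/- r.
        # (Loosely, player 1 "selects a feature" and player 2
        # "perturbs the image within it".)
        for i in range(flat.size):
            for sign in (1.0, -1.0):
                x_pert = flat.copy()
                x_pert[i] = np.clip(x_pert[i] + sign * r, 0.0, 1.0)
                if predict(x_pert.reshape(x.shape)) != label:
                    return r  # adversarial example found at radius r
    return None  # no label flip found within max_radius
```

Exhaustive enumeration of this kind scales exponentially once coordinates are perturbed jointly, which is precisely why the paper casts the search as a game and solves it in an anytime fashion, with Monte Carlo tree search tightening the upper bounds and Admissible A* or Alpha-Beta Pruning tightening the lower bounds.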
Recommendations
- Safety verification of deep neural networks
- DeepSafe: a data-driven approach for assessing robustness of neural networks
- Enhancing robustness verification for deep neural networks via symbolic propagation
- Reluplex: a calculus for reasoning about deep neural networks
- An abstraction-based framework for neural network verification
Cited in (14)
- Enhancing robustness verification for deep neural networks via symbolic propagation
- Robustness verification of semantic segmentation neural networks using relaxed reachability
- Constructing the basis path set by eliminating the path dependency
- DeepSafe: a data-driven approach for assessing robustness of neural networks
- Bridging formal methods and machine learning with model checking and global optimisation
- Brief Announcement
- A survey of safety and trustworthiness of deep neural networks: verification, testing, adversarial attack and defence, and interpretability
- Improving neural network verification through spurious region guided refinement
- Safety verification of deep neural networks
- Bicriteria algorithms to balance coverage and cost in team formation under online model
- Safety Verification for Deep Neural Networks with Provable Guarantees (Invited Paper)
- Generalizing universal adversarial perturbations for deep neural networks
- Bicriteria streaming algorithms to balance gain and cost with cardinality constraint
- Reachability analysis of deep ReLU neural networks using facet-vertex incidence