Generating probabilistic safety guarantees for neural network controllers
MaRDI portal, publication Q6134350
Abstract: Neural networks serve as effective controllers in a variety of complex settings due to their ability to represent expressive policies. The complex nature of neural networks, however, makes their output difficult to verify and predict, which limits their use in safety-critical applications. While simulations provide insight into the performance of neural network controllers, they are not enough to guarantee that the controller will perform safely in all scenarios. To address this problem, recent work has focused on formal methods to verify properties of neural network outputs. For neural network controllers, we can use a dynamics model to determine the output properties that must hold for the controller to operate safely. In this work, we develop a method to use the results from neural network verification tools to provide probabilistic safety guarantees on a neural network controller. We develop an adaptive verification approach to efficiently generate an overapproximation of the neural network policy. Next, we modify the traditional formulation of Markov decision process (MDP) model checking to provide guarantees on the overapproximated policy given a stochastic dynamics model. Finally, we incorporate techniques in state abstraction to reduce overapproximation error during the model checking process. We show that our method is able to generate meaningful probabilistic safety guarantees for aircraft collision avoidance neural networks that are loosely inspired by Airborne Collision Avoidance System X (ACAS X), a family of collision avoidance systems that formulates the problem as a partially observable Markov decision process (POMDP).
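The model-checking step described in the abstract can be sketched concretely. Given an unsafe set U, a stochastic transition model T(s' | s, a), and an overapproximated action set A(s) that a verification tool returns for each discretized state, a sound lower bound on the probability of staying safe for t more steps follows the worst-case backup P_t(s) = min over a in A(s) of the sum over s' of T(s' | s, a) P_{t-1}(s'), with P_0 = 0 on U and 1 elsewhere. The Python sketch below illustrates this backup on a toy problem; it is not the authors' implementation, and the grid size, slip probability, and per-state action sets are invented stand-ins for what a verification tool would produce.

```python
import numpy as np

# Minimal sketch (assumptions, not the paper's code): worst-case
# finite-horizon model checking over an overapproximated policy.
# A verification tool is assumed to have returned, for each discrete
# state, the SET of actions the network might take anywhere in that
# state's cell; minimizing over that set keeps the bound sound.

n_states = 10            # toy 1-D state space; state 0 is the unsafe state
actions = [-1, 0, +1]    # move left / stay / move right
horizon = 20

# Hypothetical overapproximated policy: indices into `actions` the network
# could output in each state (a singleton where verification was tight).
policy_overapprox = [{1, 2} if s < 3 else {2} for s in range(n_states)]

def transition(s, a_idx):
    """Stochastic dynamics: intended move w.p. 0.8, slip one cell left w.p. 0.2."""
    p = np.zeros(n_states)
    intended = min(max(s + actions[a_idx], 0), n_states - 1)
    slipped = max(s - 1, 0)
    p[intended] += 0.8
    p[slipped] += 0.2
    return p

# Backward induction: P_safe[s] is a lower bound on the probability of
# remaining safe for the remaining steps; the unsafe state stays at 0.
p_safe = np.ones(n_states)
p_safe[0] = 0.0
for _ in range(horizon):
    nxt = np.zeros(n_states)
    for s in range(1, n_states):
        nxt[s] = min(transition(s, a) @ p_safe for a in policy_overapprox[s])
    p_safe = nxt

print("Lower bound on P(safe for 20 steps), by initial state:")
print(np.round(p_safe, 3))
```

Replacing the minimum with the network's exact action would recover standard finite-horizon MDP model checking; minimizing over the overapproximated action set is what keeps the guarantee valid despite the policy overapproximation, at the cost of conservatism that the paper's adaptive verification and state abstraction aim to reduce.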
Recommendations
- Learning safe neural network controllers with barrier certificates
- Verisig 2.0: verification of neural network controllers using Taylor model preconditioning
- An abstraction-based framework for neural network verification
- Verifying low-dimensional input neural networks via input quantization
Cites work
- Scientific article, zbMATH DE number 1884411 (title not available)
- Scientific article, zbMATH DE number 5585443 (title not available)
- Decision making under uncertainty: theory and application
- On the Theory of Dynamic Programming
- Reachability analysis for neural feedback systems using regressive polynomial rule inference
- Reluplex: an efficient SMT solver for verifying deep neural networks
- Variable resolution discretization in optimal control
- Verisig
Cited in
- Risk verification of stochastic systems with neural network controllers
- Neural flocking: MPC-based supervised learning of flocking controllers
- Safety Verification and Robustness Analysis of Neural Networks via Quadratic Constraints and Semidefinite Programming
- Learning safe neural network controllers with barrier certificates