Exploiting verified neural networks via floating point numerical error
From MaRDI portal
Publication: 2145326
DOI: 10.1007/978-3-030-88806-0_9
zbMath: 1497.68309
arXiv: 2003.03021
OpenAlex: W3210373155
MaRDI QID: Q2145326
Publication date: 17 June 2022
Full work available at URL: https://arxiv.org/abs/2003.03021
- Artificial neural networks and deep learning (68T07)
- Roundoff error (65G50)
- Specification and verification (program logics, model checking, etc.) (68Q60)
- Networks and circuits as models of computation; circuit complexity (68Q06)
Related Items (1)
Cites Work
- Safe bounds in linear and mixed-integer linear programming
- Deep neural networks and mixed integer linear optimization
- Safety verification of deep neural networks
- Reluplex: an efficient SMT solver for verifying deep neural networks
- Verifying binarized neural networks by Angluin-style learning
- An Abstract Interpretation Framework for the Round-Off Error Analysis of Floating-Point Programs
- Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks
- Programming Languages and Systems