A domain-theoretic framework for robustness analysis of neural networks
From MaRDI portal
Publication:6149907
Abstract: A domain-theoretic framework is presented for validated robustness analysis of neural networks. First, the global robustness of a general class of networks is analyzed. Then, using the fact that Edalat's domain-theoretic L-derivative coincides with Clarke's generalized gradient, the framework is extended to attack-agnostic local robustness analysis. The proposed framework is well suited to designing algorithms that are correct by construction. This claim is exemplified by developing a validated algorithm for estimating the Lipschitz constant of feedforward regressors. Completeness of the algorithm is proved over differentiable networks, and also over general position ReLU networks. Computability results are obtained within the framework of effectively given domains. Using the proposed domain model, differentiable and non-differentiable networks can be analyzed uniformly. The validated algorithm is implemented using arbitrary-precision interval arithmetic, and the results of some experiments are presented. The software implementation handles floating-point errors as well, so it is validated end to end.
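The abstract's key ingredients (interval arithmetic, the Clarke generalized gradient of ReLU, and a validated Lipschitz upper bound) can be illustrated with a minimal sketch. Everything below is an illustrative assumption, not the paper's algorithm: the `Interval` class, the one-neuron network `f(x) = w2 * relu(w1*x + b1)`, and the names `relu_deriv` and `lipschitz_bound` are invented for this example, and a rigorous implementation would use outward-rounded, arbitrary-precision interval arithmetic (e.g. MPFI, cited below) rather than plain floats.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    """A closed real interval [lo, hi] with naive (not outward-rounded) arithmetic."""
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

def relu_deriv(z: Interval) -> Interval:
    # Enclosure of Clarke's generalized gradient of ReLU over z:
    # {1} if z > 0, {0} if z < 0, and the full set-valued [0, 1]
    # when z straddles the non-differentiable point 0.
    if z.lo > 0:
        return Interval(1.0, 1.0)
    if z.hi < 0:
        return Interval(0.0, 0.0)
    return Interval(0.0, 1.0)

def lipschitz_bound(w1: float, b1: float, w2: float, x: Interval) -> float:
    # Enclose f'(x) for f(x) = w2 * relu(w1*x + b1) by the chain rule
    # in interval arithmetic; the max absolute endpoint of the gradient
    # enclosure is a validated Lipschitz upper bound over x.
    z = Interval(w1, w1) * x + Interval(b1, b1)
    grad = Interval(w2, w2) * Interval(w1, w1) * relu_deriv(z)
    return max(abs(grad.lo), abs(grad.hi))
```

For example, `lipschitz_bound(2.0, -1.0, 3.0, Interval(-1.0, 1.0))` returns `6.0`: the pre-activation interval `[-3, 1]` contains 0, so the ReLU derivative is enclosed by `[0, 1]` and the gradient by `[0, 6]`. This mirrors, in miniature, how the paper's domain model treats differentiable and non-differentiable networks uniformly.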
Cites work
- Scientific article (zbMATH DE number 3706504; title not available)
- Scientific article (zbMATH DE number 46303; title not available)
- Scientific article (zbMATH DE number 52121; title not available)
- Scientific article (zbMATH DE number 1222580; title not available)
- Scientific article (zbMATH DE number 524106; title not available)
- Scientific article (zbMATH DE number 1022519; title not available)
- Scientific article (zbMATH DE number 1113627; title not available)
- Scientific article (zbMATH DE number 1460545; title not available)
- Scientific article (zbMATH DE number 2206777; title not available)
- A Domain-Theoretic Account of Picard's Theorem
- A continuous derivative for real-valued functions
- A domain-theoretic approach to computability on the real line
- A language for differentiable functions
- Abstract Interpretation, Logical Relations, and Kan Extensions
- Abstract neural networks
- Benign overfitting in linear regression
- Clarke's generalized gradient and Edalat's L-derivative
- Compositional semantics of dataflow networks with query-driven communication of exact values
- Concrete models of computation for topological algebras
- Continuous Lattices and Domains
- Denotational semantics of hybrid automata
- Domain theoretic second-order Euler's method for solving initial value problems
- Domain theory and differential calculus (functions of one variable)
- Domains for Computation in Mathematics, Physics and Exact Real Arithmetic
- Dynamical systems, measures, and fractals via domain theory
- Effectively given domains
- Exploiting verified neural networks via floating point numerical error
- Extreme value theory. An introduction.
- Integration in Real PCF
- Introduction to Interval Analysis
- Motivations for an arbitrary precision interval arithmetic and the MPFI library
- Non-Hausdorff topology and domain theory. Selected topics in point-set topology
- Properly injective spaces and function spaces
- Real number computability and domain theory
- SHRAD: A language for sequential real number computation
- Safe & robust reachability analysis of hybrid systems
- Semantics of query-driven communication of exact values
- Strictness analysis for higher-order functions
- System analysis and robustness
- The way-below relation of function spaces over semantic domains