A domain-theoretic framework for robustness analysis of neural networks

From MaRDI portal

Publication: Q6149907
DOI: 10.1017/S0960129523000142
arXiv: 2203.00295
OpenAlex: W4377822310


Authors: Can Zhou, Yiran Li, Amin Farjudian


Publication date: 5 March 2024

Published in: Mathematical Structures in Computer Science

Abstract: A domain-theoretic framework is presented for validated robustness analysis of neural networks. First, global robustness of a general class of networks is analyzed. Then, using the fact that Edalat's domain-theoretic L-derivative coincides with Clarke's generalized gradient, the framework is extended for attack-agnostic local robustness analysis. The proposed framework is well-suited to designing algorithms which are correct by construction. This claim is exemplified by developing a validated algorithm for estimation of the Lipschitz constant of feedforward regressors. The completeness of the algorithm is proved over differentiable networks, and also over general position ReLU networks. Computability results are obtained within the framework of effectively given domains. Using the proposed domain model, differentiable and non-differentiable networks can be analyzed uniformly. The validated algorithm is implemented using arbitrary-precision interval arithmetic, and the results of some experiments are presented. The software implementation is truly validated, as it also accounts for floating-point rounding errors.
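The core idea of the abstract — enclosing the (Clarke) generalized Jacobian of a ReLU network in intervals over an input box and reading off a guaranteed Lipschitz upper bound — can be illustrated with a minimal sketch. This is not the paper's algorithm: all function names here are hypothetical, the bound is in the infinity norm only, and plain floating-point arithmetic is used rather than the outward-rounded, arbitrary-precision interval arithmetic the paper requires for a truly validated implementation.

```python
import numpy as np

def imatmul(W, A_lo, A_hi):
    """Interval matrix product W @ A for a real matrix W and interval matrix A = [A_lo, A_hi]."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ A_lo + Wn @ A_hi, Wp @ A_hi + Wn @ A_lo

def iscale_rows(d_lo, d_hi, A_lo, A_hi):
    """Elementwise interval product diag(d) @ A for an interval vector d >= 0."""
    c = [d_lo[:, None] * A_lo, d_lo[:, None] * A_hi,
         d_hi[:, None] * A_lo, d_hi[:, None] * A_hi]
    return np.minimum.reduce(c), np.maximum.reduce(c)

def lipschitz_upper_bound(weights, biases, x_lo, x_hi):
    """Guaranteed upper bound on the inf-norm Lipschitz constant of a
    feedforward ReLU network over the input box [x_lo, x_hi], obtained by
    propagating an interval enclosure of the generalized Jacobian."""
    # Pre-activations of the first layer, as intervals.
    z_lo, z_hi = imatmul(weights[0], x_lo[:, None], x_hi[:, None])
    z_lo, z_hi = z_lo.ravel() + biases[0], z_hi.ravel() + biases[0]
    # Interval Jacobian accumulated so far: exactly the first weight matrix.
    J_lo = J_hi = weights[0].astype(float)
    for W, b in zip(weights[1:], biases[1:]):
        # Clarke-style enclosure of the ReLU derivative on [z_lo, z_hi]:
        # {1} if z > 0, {0} if z < 0, and the whole interval [0, 1] at a kink.
        d_lo = (z_lo > 0).astype(float)
        d_hi = (z_hi >= 0).astype(float)
        J_lo, J_hi = iscale_rows(d_lo, d_hi, J_lo, J_hi)
        J_lo, J_hi = imatmul(W, J_lo, J_hi)
        # Interval forward pass: ReLU, then the next affine layer.
        a_lo, a_hi = np.maximum(z_lo, 0.0), np.maximum(z_hi, 0.0)
        z_lo, z_hi = imatmul(W, a_lo[:, None], a_hi[:, None])
        z_lo, z_hi = z_lo.ravel() + b, z_hi.ravel() + b
    # |J_ij| <= max(|J_lo|, |J_hi|); the inf-norm is the maximal row sum.
    mag = np.maximum(np.abs(J_lo), np.abs(J_hi))
    return mag.sum(axis=1).max()

if __name__ == "__main__":
    # f(x) = relu(x) + relu(-x) = |x|: true Lipschitz constant is 1.
    W = [np.array([[1.0], [-1.0]]), np.array([[1.0, 1.0]])]
    b = [np.zeros(2), np.zeros(1)]
    print(lipschitz_upper_bound(W, b, np.array([-1.0]), np.array([1.0])))  # 1.0
```

Because the input box straddles the ReLU kinks, the derivative of each hidden unit is enclosed in [0, 1] rather than chosen arbitrarily; this is what makes the bound sound for non-differentiable points, mirroring (in a much cruder way) the role the L-derivative plays in the paper's uniform treatment of differentiable and non-differentiable networks.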


Full work available at URL: https://arxiv.org/abs/2203.00295







