The Barron space and the flow-induced function spaces for neural network models

From MaRDI portal
Publication:2117337

DOI: 10.1007/s00365-021-09549-y · zbMATH Open: 1490.65020 · arXiv: 1906.08039 · OpenAlex: W3165099133 · MaRDI QID: Q2117337 · FDO: Q2117337

Yanyan Li

Publication date: 21 March 2022

Published in: Constructive Approximation

Abstract: One of the key issues in the analysis of machine learning models is to identify the appropriate function space and norm for the model. This is the set of functions endowed with a quantity which can control the approximation and estimation errors by a particular machine learning model. In this paper, we address this issue for two representative neural network models: the two-layer networks and the residual neural networks. We define the Barron space and show that it is the right space for two-layer neural network models in the sense that optimal direct and inverse approximation theorems hold for functions in the Barron space. For residual neural network models, we construct the so-called flow-induced function space, and prove direct and inverse approximation theorems for this space. In addition, we show that the Rademacher complexity for bounded sets under these norms has the optimal upper bounds.
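The two-layer model discussed in the abstract takes the form f_m(x) = (1/m) Σ_k a_k σ(w_k·x + b_k) with σ the ReLU, and the discrete analogue of the Barron norm is the path norm (1/m) Σ_k |a_k| (‖w_k‖₁ + |b_k|), which controls the estimation error via the Rademacher complexity bound. A minimal NumPy sketch of these two quantities (random illustrative parameters, not taken from the paper):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def two_layer_net(x, a, W, b):
    """f_m(x) = (1/m) * sum_k a_k * relu(w_k . x + b_k)."""
    m = a.shape[0]
    return (a @ relu(W @ x + b)) / m

def path_norm(a, W, b):
    """Discrete analogue of the Barron norm:
    (1/m) * sum_k |a_k| * (||w_k||_1 + |b_k|)."""
    m = a.shape[0]
    return np.sum(np.abs(a) * (np.abs(W).sum(axis=1) + np.abs(b))) / m

rng = np.random.default_rng(0)
m, d = 100, 5                     # m hidden units, input dimension d (illustrative)
a = rng.normal(size=m)            # outer coefficients a_k
W = rng.normal(size=(m, d))       # inner weights w_k (rows)
b = rng.normal(size=m)            # biases b_k

x = rng.normal(size=d)
print(two_layer_net(x, a, W, b))  # scalar network output
print(path_norm(a, W, b))         # norm bounding approximation/estimation errors
```

Functions with finite Barron norm admit the Monte Carlo approximation rate O(1/√m) by two-layer networks, which is the direct theorem the abstract refers to.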


Full work available at URL: https://arxiv.org/abs/1906.08039
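For the residual model, the flow-induced space is built from compositions of residual blocks of the form z_{l+1} = z_l + (1/L) U_l σ(V_l z_l), viewed as a discretized flow. A hedged sketch of this forward map (depth, widths, and scaling chosen for illustration, not from the paper):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def residual_forward(x, Us, Vs):
    """Forward flow z_{l+1} = z_l + (1/L) * U_l @ relu(V_l @ z_l); returns z_L."""
    L = len(Us)
    z = x.copy()
    for U, V in zip(Us, Vs):
        z = z + (U @ relu(V @ z)) / L
    return z

rng = np.random.default_rng(1)
L, D, Dh = 8, 4, 16               # depth, state dim, hidden dim (illustrative)
Us = [rng.normal(size=(D, Dh)) / np.sqrt(Dh) for _ in range(L)]
Vs = [rng.normal(size=(Dh, D)) for _ in range(L)]

x = rng.normal(size=D)
print(residual_forward(x, Us, Vs))  # state after L residual steps
```

The 1/L scaling makes the depth limit L → ∞ behave like an ODE flow, which is the continuum picture underlying the flow-induced norm.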






Cited In (36)





This page was built for publication: The Barron space and the flow-induced function spaces for neural network models
