High-order approximation rates for shallow neural networks with cosine and ReLU^k activation functions

From MaRDI portal
Publication:2118396

DOI: 10.1016/J.ACHA.2021.12.005
zbMATH Open: 1501.41006
arXiv: 2012.07205
OpenAlex: W4200096621
MaRDI QID: Q2118396
FDO: Q2118396


Authors: Jonathan W. Siegel, Jinchao Xu


Publication date: 22 March 2022

Published in: Applied and Computational Harmonic Analysis

Abstract: We study the approximation properties of shallow neural networks with an activation function that is a power of the rectified linear unit. Specifically, we consider the dependence of the approximation rate on the dimension and on the smoothness, in the spectral Barron space, of the underlying function \(f\) to be approximated. We show that as the smoothness index \(s\) of \(f\) increases, shallow neural networks with \(\mathrm{ReLU}^k\) activation function obtain an improved approximation rate, up to a best possible rate of \(O(n^{-(k+1)}\log(n))\) in \(L^2\), independent of the dimension \(d\). The significance of this result is that the activation function \(\mathrm{ReLU}^k\) is fixed independently of the dimension, while for classical methods the degree of polynomial approximation or the smoothness of the wavelets used would have to increase in order to take advantage of the dimension-dependent smoothness of \(f\). In addition, we derive improved approximation rates for shallow neural networks with the cosine activation function on the spectral Barron space. Finally, we prove lower bounds showing that the approximation rates attained are optimal under the given assumptions.
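The model class discussed in the abstract is a shallow (single-hidden-layer) network with \(\mathrm{ReLU}^k\) activation, i.e. a function of the form \(f_n(x) = \sum_{i=1}^n a_i \max(0, \omega_i \cdot x + b_i)^k\). The following is a minimal Python/NumPy sketch of how such a network is evaluated; the function names, parameter shapes, and randomly drawn example parameters are illustrative assumptions, not taken from the paper.

import numpy as np

def relu_k(z, k):
    # Power of the rectified linear unit: ReLU^k(z) = max(0, z)^k.
    return np.maximum(z, 0.0) ** k

def shallow_relu_k_net(x, weights, biases, coeffs, k):
    # Evaluate f_n(x) = sum_i coeffs[i] * ReLU(weights[i] . x + biases[i])^k,
    # a shallow network with n neurons in input dimension d.
    #   x       : (d,)   input point
    #   weights : (n, d) inner weights
    #   biases  : (n,)   inner biases
    #   coeffs  : (n,)   outer coefficients
    pre_activations = weights @ x + biases       # shape (n,)
    return coeffs @ relu_k(pre_activations, k)   # scalar output

# Illustrative example: n = 50 random neurons, dimension d = 3, k = 2.
rng = np.random.default_rng(0)
d, n, k = 3, 50, 2
W = rng.standard_normal((n, d))
b = rng.standard_normal(n)
a = rng.standard_normal(n) / n
x = rng.standard_normal(d)
print(shallow_relu_k_net(x, W, b, a, k))

The approximation rates in the abstract quantify how fast the best choice of the parameters \((a_i, \omega_i, b_i)\) lets \(f_n\) approach a target \(f\) in \(L^2\) as the number of neurons \(n\) grows.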


Full work available at URL: https://arxiv.org/abs/2012.07205










Cited In (17)





This page was built for publication: High-order approximation rates for shallow neural networks with cosine and \(\mathrm{ReLU}^k\) activation functions
