On sharpness of error bounds for univariate approximation by single hidden layer feedforward neural networks

From MaRDI portal
Publication:777322

DOI: 10.1007/S00025-020-01239-8
zbMATH Open: 1443.62314
arXiv: 1811.05199
OpenAlex: W3038177324
MaRDI QID: Q777322

Steffen J. Goebbels

Publication date: 7 July 2020

Published in: Results in Mathematics

Abstract: A new non-linear variant of a quantitative extension of the uniform boundedness principle is used to show sharpness of error bounds for univariate approximation by sums of sigmoid and ReLU functions. Single hidden layer feedforward neural networks with one input node perform such operations. Errors of best approximation can be expressed in terms of moduli of smoothness of the function to be approximated (i.e., to be learned). In this context, the quantitative extension of the uniform boundedness principle allows the construction of counterexamples showing that the approximation rates are best possible: the approximation errors do not belong to the little-o class of the given bounds. With piecewise linear activation functions, the discussed problem becomes free knot spline approximation. The results of the present paper also hold for non-polynomial (and not piecewise defined) activation functions such as the inverse tangent. Based on the Vapnik-Chervonenkis dimension, first results are shown for the logistic function.
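The objects of the abstract can be illustrated concretely: a single hidden layer ReLU network with one input node computes a sum of ramps, i.e., a continuous piecewise linear function whose breakpoints are the ReLU "knots". The following minimal sketch (an illustration, not the paper's method) fits such a sum to a univariate function by least squares on a uniform knot grid; the function `relu_network_fit`, the uniform knot placement, and the sample sizes are all choices made here for the example, whereas the paper concerns best approximation with free knots.

```python
import numpy as np

def relu_network_fit(f, n_units, interval=(0.0, 1.0), n_samples=400):
    """Least-squares fit of f by c0 + sum_k w_k * max(x - t_k, 0).

    This is a single hidden layer ReLU network with one input node; as a
    function of x it is continuous and piecewise linear with knots t_k.
    Here the knots are fixed on a uniform grid (a simplification: the
    paper's setting corresponds to *free* knot spline approximation).
    """
    a, b = interval
    x = np.linspace(a, b, n_samples)
    t = np.linspace(a, b, n_units, endpoint=False)  # knot positions
    # design matrix: constant column plus one ReLU ramp per knot
    A = np.column_stack([np.ones_like(x)] + [np.maximum(x - tk, 0.0) for tk in t])
    coef, *_ = np.linalg.lstsq(A, f(x), rcond=None)

    def model(xq):
        xq = np.asarray(xq, dtype=float)
        Aq = np.column_stack([np.ones_like(xq)] + [np.maximum(xq - tk, 0.0) for tk in t])
        return Aq @ coef

    return model

# usage: approximate sin(2*pi*x); more hidden units -> smaller error
g = lambda x: np.sin(2 * np.pi * x)
xs = np.linspace(0.0, 1.0, 1000)
err = {n: np.max(np.abs(relu_network_fit(g, n)(xs) - g(xs))) for n in (4, 16)}
```

For a twice differentiable target such as sin(2πx), the observed sup error decays roughly like the squared knot spacing, consistent with bounds expressed via moduli of smoothness; the paper shows that such rates cannot be improved to little-o for the classes considered.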


Full work available at URL: https://arxiv.org/abs/1811.05199







Cited In (8)






