An analysis of training and generalization errors in shallow and deep networks


DOI: 10.1016/J.NEUNET.2019.08.028
zbMATH Open: 1434.68513
arXiv: 1802.06266
OpenAlex: W2972277540
Wikidata: Q90416114 (Scholia: Q90416114)
MaRDI QID: Q2185668
FDO: Q2185668


Authors: T. Poggio, H. N. Mhaskar


Publication date: 5 June 2020

Published in: Neural Networks

Abstract: This paper is motivated by an open problem concerning deep networks, namely, the apparent absence of over-fitting despite large over-parametrization that allows the training data to be fit perfectly. We analyze this phenomenon for regression problems in which each unit evaluates a periodic activation function. We argue that the minimal expected value of the square loss is inappropriate for measuring the generalization error when approximating compositional functions, because it does not take full advantage of the compositional structure. Instead, we measure the generalization error in the sense of maximum loss and, in some cases, as a pointwise error. We give estimates of exactly how many parameters ensure both zero training error and good generalization error. We prove that a solution of a regularization problem is guaranteed to yield both a good training error and a good generalization error, and we estimate how much error to expect at any given test point.
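The following is a minimal numerical sketch, not the paper's construction, of the two error measures contrasted in the abstract: a shallow network whose units evaluate a periodic (cosine) activation is fit to training data by ridge-regularized least squares, and the mean-square training error is compared with the sup-norm generalization error on a dense grid. The target function, the random integer frequencies, the number of units, and the regularization weight are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    # Hypothetical smooth 1-D target on [-pi, pi] (illustrative assumption).
    return np.sin(2 * x) + 0.5 * np.cos(5 * x)

# Training data: the network will fit these points (nearly) exactly.
n_train = 40
x_train = rng.uniform(-np.pi, np.pi, size=n_train)
y_train = target(x_train)

# Shallow over-parametrized network: N units, each evaluating the periodic
# activation cos(w_k * x + b_k). Inner weights are fixed at random and only
# the outer weights are fit (a random-feature simplification, assumed here).
N = 200
w = rng.integers(-8, 9, size=N)           # integer frequencies -> 2*pi-periodic units
b = rng.uniform(0, 2 * np.pi, size=N)

def features(x):
    return np.cos(np.outer(x, w) + b)     # shape (len(x), N)

# Outer weights from a regularized least-squares (ridge) problem.
lam = 1e-6
Phi = features(x_train)
coef = np.linalg.solve(Phi.T @ Phi + lam * np.eye(N), Phi.T @ y_train)

def model(x):
    return features(x) @ coef

# Training error (mean square loss) vs. generalization error measured in the
# sup norm, approximated on a dense evaluation grid.
x_grid = np.linspace(-np.pi, np.pi, 4000)
train_mse = np.mean((model(x_train) - y_train) ** 2)
sup_err = np.max(np.abs(model(x_grid) - target(x_grid)))
print(f"training MSE:          {train_mse:.2e}")
print(f"sup-norm error (grid): {sup_err:.2e}")
```

With far more units than training points, the training MSE is essentially zero while the sup-norm error on the grid stays moderate, which is the kind of contrast between training and generalization error the abstract discusses.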


Full work available at URL: https://arxiv.org/abs/1802.06266










Cited in: 14 documents




