Better approximations of high dimensional smooth functions by deep neural networks with rectified power units

From MaRDI portal
Publication:5162006

DOI: 10.4208/CICP.OA-2019-0168
zbMATH Open: 1474.65031
arXiv: 1903.05858
MaRDI QID: Q5162006
FDO: Q5162006


Authors: Bo Li, Shanshan Tang, Haijun Yu


Publication date: 1 November 2021

Published in: Communications in Computational Physics

Abstract: Deep neural networks with rectified linear units (ReLU) are getting more and more popular due to their universal representation power and successful applications. Some theoretical progress regarding the approximation power of deep ReLU networks for functions in Sobolev spaces and Korobov spaces has recently been made by [D. Yarotsky, Neural Networks, 94:103-114, 2017] and [H. Montanelli and Q. Du, SIAM J. Math. Data Sci., 1:78-92, 2019], etc. In this paper, we show that deep networks with rectified power units (RePU) can give better approximations of smooth functions than deep ReLU networks. Our analysis is based on classical polynomial approximation theory and on efficient algorithms, proposed in this paper, that convert polynomials into deep RePU networks of optimal size with no approximation error. Compared with the results on ReLU networks, the sizes of the RePU networks required to approximate functions in Sobolev spaces and Korobov spaces with an error tolerance $\varepsilon$, by our constructive proofs, are in general $\mathcal{O}(\log\frac{1}{\varepsilon})$ times smaller than the sizes of the corresponding ReLU networks constructed in most of the existing literature. Compared with the classical results of Mhaskar [Mhaskar, Adv. Comput. Math. 1:61-80, 1993], our constructions use fewer activation functions and are numerically more stable; they can serve as good initializations of deep RePU networks and be further trained to break the limit of linear approximation theory. The functions represented by RePU networks are smooth, so they naturally fit in settings where derivatives appear in the loss function.
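The exact-conversion claim in the abstract rests on the fact that a RePU unit with power $s=2$, $\sigma_2(x)=\max(0,x)^2$, can reproduce squares and products of its inputs exactly. The sketch below is not the paper's construction, only a minimal illustration of these identities (the function names `repu`, `square`, and `multiply` are ours, not the paper's):

```python
import numpy as np

def repu(x, s=2):
    """Rectified power unit: max(0, x)**s (s=2 is the squared ReLU)."""
    return np.maximum(0.0, x) ** s

def square(x):
    """Exact x**2 from two RePU (s=2) units: sigma2(x) + sigma2(-x) = x**2."""
    return repu(x) + repu(-x)

def multiply(x, y):
    """Exact product via the polarization identity
    x*y = ((x + y)**2 - (x - y)**2) / 4, built from four RePU units."""
    return (square(x + y) - square(x - y)) / 4.0

x = np.linspace(-2.0, 2.0, 101)
assert np.allclose(square(x), x**2)
assert np.allclose(multiply(x, 3.0 - x), x * (3.0 - x))
```

Because squaring and multiplication are exact (not approximated, as in ReLU constructions), monomials and hence arbitrary polynomials can be assembled by such networks with zero approximation error, which is what removes the extra $\mathcal{O}(\log\frac{1}{\varepsilon})$ factor in the network size.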


Full work available at URL: https://arxiv.org/abs/1903.05858




Recommendations




Cites Work


Cited In (21)

Uses Software





This page was built for publication: Better approximations of high dimensional smooth functions by deep neural networks with rectified power units
