Deep Neural Networks with ReLU-Sine-Exponential Activations Break Curse of Dimensionality in Approximation on Hölder Class


DOI: 10.1137/21M144431X
arXiv: 2103.00542
OpenAlex: W4385985820
MaRDI QID: Q6137593
FDO: Q6137593

Xiliang Lu, Fengru Wang, Y. M. Lai, Yuanyuan Yang, Jerry Zhijian Yang, Yu Ling Jiao

Publication date: 4 September 2023

Published in: SIAM Journal on Mathematical Analysis

Abstract: In this paper, we construct neural networks with ReLU, sine and $2^x$ as activation functions. For a general continuous $f$ defined on $[0,1]^d$ with continuity modulus $\omega_f(\cdot)$, we construct ReLU-sine-$2^x$ networks that enjoy an approximation rate $\mathcal{O}\bigl(\omega_f(\sqrt{d})\cdot 2^{-M}+\omega_f\bigl(\tfrac{\sqrt{d}}{N}\bigr)\bigr)$, where $M,N\in\mathbb{N}^{+}$ denote the hyperparameters related to the widths of the networks. As a consequence, we can construct a ReLU-sine-$2^x$ network with depth $5$ and width $\max\bigl\{\bigl\lceil 2d^{3/2}\bigl(\tfrac{3\mu}{\epsilon}\bigr)^{1/\alpha}\bigr\rceil,\ 2\bigl\lceil \log_2\tfrac{3\mu d^{\alpha/2}}{2\epsilon}\bigr\rceil+2\bigr\}$ that approximates $f\in\mathcal{H}_{\mu}^{\alpha}([0,1]^d)$ within a given tolerance $\epsilon>0$ measured in the $L^p$ norm, $p\in[1,\infty)$, where $\mathcal{H}_{\mu}^{\alpha}([0,1]^d)$ denotes the Hölder continuous function class defined on $[0,1]^d$ with order $\alpha\in(0,1]$ and constant $\mu>0$. Therefore, the ReLU-sine-$2^x$ networks overcome the curse of dimensionality on $\mathcal{H}_{\mu}^{\alpha}([0,1]^d)$. In addition to their super expressive power, functions implemented by ReLU-sine-$2^x$ networks are (generalized) differentiable, enabling us to apply SGD to train them.
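The depth and width prescription in the abstract is explicit, so the size of the approximating network can be evaluated directly for given $d$, $\alpha$, $\mu$ and $\epsilon$. Below is a minimal Python sketch of that arithmetic; the function name and the example values are illustrative assumptions, not taken from the paper.

```python
import math

def prescribed_width(d, alpha, mu, eps):
    """Width bound stated in the abstract for approximating
    f in H^alpha_mu([0,1]^d) to tolerance eps with a depth-5
    ReLU-sine-2^x network:
        max{ ceil(2 d^{3/2} (3 mu / eps)^{1/alpha}),
             2 ceil(log2(3 mu d^{alpha/2} / (2 eps))) + 2 }.
    """
    w1 = math.ceil(2 * d ** 1.5 * (3 * mu / eps) ** (1 / alpha))
    w2 = 2 * math.ceil(math.log2(3 * mu * d ** (alpha / 2) / (2 * eps))) + 2
    return max(w1, w2)

if __name__ == "__main__":
    # Hypothetical example: d = 10, alpha = 0.5, mu = 1, eps = 0.1.
    d, alpha, mu, eps = 10, 0.5, 1.0, 0.1
    print("depth = 5, width =", prescribed_width(d, alpha, mu, eps))
```

Note how the bound depends only polynomially on the dimension $d$ (through $d^{3/2}$ and $d^{\alpha/2}$) rather than exponentially, which is the sense in which the construction breaks the curse of dimensionality.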


Full work available at URL: https://arxiv.org/abs/2103.00542









Cited In (3)





