Approximation by Combinations of ReLU and Squared ReLU Ridge Functions With \(\ell^1\) and \(\ell^0\) Controls

From MaRDI portal
Publication:4562132

DOI: 10.1109/TIT.2018.2874447
zbMath: 1432.41003
arXiv: 1607.07819
OpenAlex: W2803636134
MaRDI QID: Q4562132

Authors: Andrew R. Barron, Jason M. Klusowski

Publication date: 18 December 2018

Published in: IEEE Transactions on Information Theory

Full work available at URL: https://arxiv.org/abs/1607.07819

Related Items (31)

Deep Adaptive Basis Galerkin Method for High-Dimensional Evolution Equations With Oscillatory Solutions
Neural network approximation
Approximation properties of deep ReLU CNNs
Uniform approximation rates and metric entropy of shallow neural networks
ReLU deep neural networks from the hierarchical basis perspective
Nonconvex regularization for sparse neural networks
Deep Ritz Method for the Spectral Fractional Laplacian Equation Using the Caffarelli--Silvestre Extension
Theory of deep convolutional neural networks: downsampling
Deep Neural Networks for Solving Large Linear Systems Arising from High-Dimensional Problems
Theory of deep convolutional neural networks. III: Approximating radial functions
Rates of approximation by ReLU shallow neural networks
A priori generalization error analysis of two-layer neural networks for solving high dimensional Schrödinger eigenvalue problems
Characterization of the variation spaces corresponding to shallow neural networks
Approximation bounds for random neural networks and reservoir systems
SignReLU neural network and its approximation ability
Error bounds for approximations using multichannel deep convolutional neural networks with downsampling
Lower bounds for artificial neural network approximations: a proof that shallow neural networks fail to overcome the curse of dimensionality
Deep learning theory of distribution regression with CNNs
Approximation of nonlinear functionals using deep ReLU networks
Learning ability of interpolating deep convolutional neural networks
Two-layer networks with the \(\text{ReLU}^k\) activation function: Barron spaces and derivative approximation
Causal inference of general treatment effects using neural networks with a diverging number of confounders
Greedy training algorithms for neural networks and applications to PDEs
Universality of deep convolutional neural networks
Theory of deep convolutional neural networks. II: Spherical analysis
Approximation of functions from Korobov spaces by deep convolutional neural networks
Representation formulas and pointwise properties for Barron functions
Approximating functions with multi-features by deep convolutional neural networks
Depth separations in neural networks: what is actually being separated?
High-order approximation rates for shallow neural networks with cosine and \(\mathrm{ReLU}^k\) activation functions
A New Function Space from Barron Class and Application to Neural Network Approximation

This page was built for publication: Approximation by Combinations of ReLU and Squared ReLU Ridge Functions With \(\ell^1\) and \(\ell^0\) Controls