Approximation by Combinations of ReLU and Squared ReLU Ridge Functions With $\ell^1$ and $\ell^0$ Controls
Publication: 4562132
DOI: 10.1109/TIT.2018.2874447
zbMath: 1432.41003
arXiv: 1607.07819
OpenAlex: W2803636134
MaRDI QID: Q4562132
Andrew R. Barron, Jason M. Klusowski
Publication date: 18 December 2018
Published in: IEEE Transactions on Information Theory
Full work available at URL: https://arxiv.org/abs/1607.07819
Mathematics Subject Classification:
Approximation by other special function classes (41A30)
Sampling theory in information and communication theory (94A20)
Related Items (31)
Deep Adaptive Basis Galerkin Method for High-Dimensional Evolution Equations With Oscillatory Solutions
Neural network approximation
Approximation properties of deep ReLU CNNs
Uniform approximation rates and metric entropy of shallow neural networks
ReLU deep neural networks from the hierarchical basis perspective
Nonconvex regularization for sparse neural networks
Deep Ritz Method for the Spectral Fractional Laplacian Equation Using the Caffarelli--Silvestre Extension
Theory of deep convolutional neural networks: downsampling
Deep Neural Networks for Solving Large Linear Systems Arising from High-Dimensional Problems
Theory of deep convolutional neural networks. III: Approximating radial functions
Rates of approximation by ReLU shallow neural networks
A priori generalization error analysis of two-layer neural networks for solving high dimensional Schrödinger eigenvalue problems
Characterization of the variation spaces corresponding to shallow neural networks
Approximation bounds for random neural networks and reservoir systems
SignReLU neural network and its approximation ability
Error bounds for approximations using multichannel deep convolutional neural networks with downsampling
Lower bounds for artificial neural network approximations: a proof that shallow neural networks fail to overcome the curse of dimensionality
Deep learning theory of distribution regression with CNNs
Approximation of nonlinear functionals using deep ReLU networks
Learning ability of interpolating deep convolutional neural networks
Two-layer networks with the $\mathrm{ReLU}^k$ activation function: Barron spaces and derivative approximation
Causal inference of general treatment effects using neural networks with a diverging number of confounders
Greedy training algorithms for neural networks and applications to PDEs
Universality of deep convolutional neural networks
Theory of deep convolutional neural networks. II: Spherical analysis
Approximation of functions from Korobov spaces by deep convolutional neural networks
Representation formulas and pointwise properties for Barron functions
Approximating functions with multi-features by deep convolutional neural networks
Depth separations in neural networks: what is actually being separated?
High-order approximation rates for shallow neural networks with cosine and $\mathrm{ReLU}^k$ activation functions
A New Function Space from Barron Class and Application to Neural Network Approximation
This page was built for publication: Approximation by Combinations of ReLU and Squared ReLU Ridge Functions With $\ell^1$ and $\ell^0$ Controls