Approximation of smoothness classes by deep rectifier networks
From MaRDI portal
Publication:5020751
Abstract: We consider approximation rates of sparsely connected deep rectified linear unit (ReLU) and rectified power unit (RePU) neural networks for functions in Besov spaces in arbitrary dimension and on general domains. We show that deep rectifier networks with a fixed activation function attain optimal or near-optimal approximation rates for functions in the Besov space on the critical embedding line, for arbitrary smoothness order. Using interpolation theory, this implies that the entire range of smoothness classes at or above the critical line is (near-)optimally approximated by deep ReLU/RePU networks.
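For context, the "critical embedding line" referred to in the abstract is the standard one from nonlinear approximation theory; a common formulation (the exact parameter names below are an illustration, not taken from this page) is:

```latex
% Besov space B^\alpha_q(L^p(\Omega)) on a domain \Omega \subset \mathbb{R}^d,
% with approximation error measured in L^\mu(\Omega).
% The critical (Sobolev-type) embedding line relates smoothness \alpha,
% integrability p, and the target norm index \mu:
\[
  \frac{1}{p} \;=\; \frac{\alpha}{d} + \frac{1}{\mu}.
\]
% Smoothness classes with parameters on or above this line embed into
% L^\mu, and the abstract's claim is that deep ReLU/RePU networks attain
% (near-)optimal approximation rates for this whole range.
```

Points strictly above the line correspond to compact embeddings; the line itself is the borderline case, which is why attaining rates there is the hardest and most interesting regime.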
Recommendations
- Better approximations of high dimensional smooth functions by deep neural networks with rectified power units
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
- Approximation spaces of deep neural networks
- Provable approximation properties for deep neural networks
- Error bounds for approximations with deep ReLU networks
Cites work
- scientific article; zbMATH DE number 1215245 (no title available)
- scientific article; zbMATH DE number 2001584 (no title available)
- scientific article; zbMATH DE number 278904 (no title available)
- scientific article; zbMATH DE number 2208228 (no title available)
- scientific article; zbMATH DE number 3329342 (no title available)
- Besov Spaces on Domains in \(\mathbb{R}^d\)
- Biorthogonal bases of compactly supported wavelets
- Deep ReLU networks and high-order finite element methods
- Error bounds for approximations with deep ReLU networks
- Maximal functions measuring smoothness
- On nonlinear \(n\)-widths
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
- Optimal approximation with sparsely connected deep neural networks
- Optimal nonlinear approximation
- Orthogonal Polynomials and the Construction of Piecewise Polynomial Smooth Wavelets
- Quasiconformal mappings and extendability of functions in Sobolev spaces
- Wavelet coefficients measuring smoothness in \(H_ p(\mathbb{R}^ d)\)
- \(H^p\) spaces of several variables
Cited in (14)
- Weighted variation spaces and approximation by shallow ReLU networks
- Collocation approximation by deep neural ReLU networks for parametric and stochastic PDEs with lognormal inputs
- Optimal approximation with sparsely connected deep neural networks
- Neural network approximation
- Deep ReLU neural network approximation in Bochner spaces and applications to parametric PDEs
- High-order approximation rates for shallow neural networks with cosine and \(\mathrm{ReLU}^k\) activation functions
- PowerNet: efficient representations of polynomials and smooth functions by deep neural networks with rectified power units
- Better approximations of high dimensional smooth functions by deep neural networks with rectified power units
- Approximation theory of tree tensor networks: tensorized univariate functions
- Convergence rates of deep ReLU networks for multiclass classification
- Sobolev-type embeddings for neural network approximation spaces
- Integral representations of shallow neural network with Rectified Power Unit activation function
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
- Extracting a function encoded in amplitudes of a quantum state by tensor network and orthogonal function expansion