Optimal nonlinear approximation


DOI: 10.1007/BF01171759 · zbMath: 0682.41033 · OpenAlex: W1990261069 · MaRDI QID: Q1824130

Ralph Howard, Ronald A. DeVore, Charles A. Micchelli

Publication date: 1989

Published in: Manuscripta Mathematica

Full work available at URL: https://eudml.org/doc/155392



Related Items

An approach for recovering initial temperature via a bounded linear time sampling
Manifold Approximations via Transported Subspaces: Model Reduction for Transport-Dominated Problems
On approximating initial data in some linear evolutionary equations involving fraction Laplacian
Neural network approximation
When is approximation by Gaussian networks necessarily a linear process?
Bernstein numbers of embeddings of isotropic and dominating mixed Besov spaces
Weyl and Bernstein numbers of embeddings of Sobolev spaces with dominating mixed smoothness
A deep learning approach to Reduced Order Modelling of parameter dependent partial differential equations
Approximation properties of a multilayered feedforward artificial neural network
Wavelet compression and nonlinear \(n\)-widths
High-Dimensional Learning Under Approximate Sparsity with Applications to Nonsmooth Estimation and Regularized Neural Networks
Recovery of an initial temperature from discrete sampling
Nonlinear function approximation: computing smooth solutions with an adaptive greedy algorithm
Optimal approximation of elliptic problems by linear and nonlinear mappings. IV: Errors in \(L_{2}\) and other norms
Deep ReLU Networks Overcome the Curse of Dimensionality for Generalized Bandlimited Functions
Limitations of the approximation capabilities of neural networks with one hidden layer
Probabilistic lower bounds for approximation by shallow perceptron networks
Optimal stable nonlinear approximation
Metric entropy, \(n\)-widths, and sampling of functions on manifolds
Simultaneous neural network approximation for smooth functions
A deep network construction that adapts to intrinsic dimensionality beyond the domain
Rates of approximation by ReLU shallow neural networks
Best approximation by linear combinations of characteristic functions of half-spaces
Approximation theory of tree tensor networks: tensorized univariate functions
SignReLU neural network and its approximation ability
Optimal approximation of infinite-dimensional holomorphic functions
Non-linear manifold reduced-order models with convolutional autoencoders and reduced over-collocation method
Local approximation of operators
Computation and learning in high dimensions. Abstracts from the workshop held August 1--7, 2021 (hybrid meeting)
Lipschitz widths
Multivariate Approximation in Downward Closed Polynomial Spaces
On Linear Versus Nonlinear Approximation in the Average Case Setting
Approximation in shift-invariant spaces with deep ReLU neural networks
Continuous algorithms in \(n\)-term approximation and nonlinear widths
Breaking the curse of dimensionality in sparse polynomial approximation of parametric PDEs
Applications of classical approximation theory to periodic basis function networks and computational harmonic analysis
Optimal approximation of elliptic problems by linear and nonlinear mappings. III: Frames
Best \(m\)-term trigonometric approximation of periodic functions of several variables from Nikol'skii-Besov classes for small smoothness
Deep Network Approximation for Smooth Functions
Better Approximations of High Dimensional Smooth Functions by Deep Neural Networks with Rectified Power Units
Approximation by superposition of sigmoidal and radial basis functions
Neural networks for optimal approximation of continuous functions on the unit sphere
Optimal approximation of elliptic problems by linear and nonlinear mappings. I
Complexity of neural network approximation with limited information: A worst case approach
Optimal approximation of elliptic problems by linear and nonlinear mappings. II
On almost-best approximation by piecewise polynomial functions in the space \(C[0,1]\)
Approximating networks and extended Ritz method for the solution of functional optimization problems
Linearized two-layers neural networks in high dimension
Error bounds for approximations with deep ReLU networks
Time-variant system approximation via later-time samples
Error bounds for deep ReLU networks using the Kolmogorov-Arnold superposition theorem
On nonlinear \(n\)-widths
Evolutionary Systems Representations Based on Later Time Samples and Applications to PDEs
Nonlinear widths of classes of smooth functions defined on the unit sphere in \(\mathbb{R}^d\)
Geometry and topology of continuous best and near best approximations
Deep Network With Approximation Error Being Reciprocal of Width to Power of Square Root of Depth
Neural Networks for Functional Approximation and System Identification
On best approximation by ridge functions
Sampling, Metric Entropy, and Dimensionality Reduction
Super-resolution meets machine learning: approximation of measures
Approximation of Smoothness Classes by Deep Rectifier Networks
On best approximation of classes by radial functions
Nonlinear approximation and (deep) ReLU networks
On near-optimal time samplings for initial data best approximation



Cites Work