On the tractability of multivariate integration and approximation by neural networks
From MaRDI portal
Recommendations
- Neural-network approximation of functions of several variables
- Approximations by multivariate perturbed neural network operators
- Neural network modeling of vector multivariable functions in ill-posed approximation problems
- The errors of simultaneous approximation of multivariate functions by neural networks
- An Integral Upper Bound for Neural Network Approximation
- Degree of simultaneous approximation of multivariate functions and their derivatives by neural networks
- Neural networks and the best trigonometric approximation
- Neural networks for optimal approximation of continuous functions in \(\mathbb R^d\)
Cites work
- Scientific article; zbMATH DE number 1222809 (title not available)
- Scientific article; zbMATH DE number 3435823 (title not available)
- Scientific article; zbMATH DE number 3437452 (title not available)
- Scientific article; zbMATH DE number 1977310 (title not available)
- Scientific article; zbMATH DE number 1391397 (title not available)
- Bounds on rates of variable-basis and neural-network approximation
- Comparison of worst case errors in linear and neural network approximation
- Convergence of stochastic processes
- Dimension-independent bounds on the degree of approximation by neural networks
- Intractability results for integration and discrepancy
- On tractability of weighted integration over bounded and unbounded regions in \(\mathbb R^s\)
- Random approximants and neural networks
- The inverse of the star-discrepancy depends linearly on the dimension
- Uniform approximation by neural networks
- Universal approximation bounds for superpositions of a sigmoidal function
- Weak convergence and empirical processes. With applications to statistics
- When are integration and discrepancy tractable?
Cited in (28)
- Approximation of Sobolev classes by polynomials and ridge functions
- A low discrepancy sequence on graphs
- Correlations of random classifiers on large data sets
- Function approximation with zonal function networks with activation functions analogous to the rectified linear unit functions
- A tribute to Géza Freud
- The Barron space and the flow-induced function spaces for neural network models
- Negative results for approximation using single layer and multilayer feedforward neural networks
- Super-resolution meets machine learning: approximation of measures
- Suboptimal solutions to dynamic optimization problems via approximations of the policy functions
- Provable approximation properties for deep neural networks
- Almost optimal estimates for approximation and learning by radial basis function networks
- Accuracy of approximations of solutions to Fredholm equations by kernel methods
- Estimates of variation with respect to a set and applications to optimization problems
- Complexity of Gaussian-radial-basis networks approximating smooth functions
- On energy, discrepancy and group invariant measures on measurable subsets of Euclidean space
- Bracketing numbers for axis-parallel boxes and applications to geometric discrepancy
- Approximation by neural networks with weights varying on a finite set of directions
- Bounds and constructions for the star-discrepancy via \(\delta\)-covers
- Weighted variation spaces and approximation by shallow ReLU networks
- Construct theory of hierarchical functional networks and its application in multiple numerical integral
- Two fast and accurate heuristic RBF learning rules for data classification
- Complexity estimates based on integral transforms induced by computational units
- A deep network construction that adapts to intrinsic dimensionality beyond the domain
- Entropy, Randomization, Derandomization, and Discrepancy
- Weighted quadrature formulas and approximation by zonal function networks on the sphere
- Improved bounds for the bracketing number of orthants or revisiting an algorithm of Thiémard to compute bounds for the star discrepancy
- Advanced Monte Carlo Methods to Neural Networks
- On the problem of parameter estimation in exponential sums