A simple lemma on greedy approximation in Hilbert space and convergence rates for projection pursuit regression and neural network training
From MaRDI portal
DOI: 10.1214/AOS/1176348546
zbMath: 0746.62060
OpenAlex: W2044828368
Wikidata: Q124997995 (Scholia: Q124997995)
MaRDI QID: Q1192997
Publication date: 27 September 1992
Published in: The Annals of Statistics
Full work available at URL: https://doi.org/10.1214/aos/1176348546
Keywords: Hilbert space; neurons; ridge functions; neural network training; projection pursuit regression; iterative sequences; estimates of the rate of convergence; general convergence criterion; greedy basis expansion
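The lemma behind this entry concerns greedy iterations of the form f_n = (1 - λ)f_{n-1} + λg, where g is chosen greedily from a bounded dictionary in a Hilbert space; for targets in the closed convex hull of the dictionary, the error decays like O(1/√n). A minimal numerical sketch of such an iteration (not taken from the paper; the toy dictionary, dimensions, and the 2/(n+1) step size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Hilbert space: R^d. Dictionary atoms are the columns of G,
# normalized so the dictionary is a bounded set.
d, m = 50, 200
G = rng.normal(size=(d, m))
G /= np.linalg.norm(G, axis=0)

# Target f lies in the convex hull of the atoms, the setting in
# which Jones-type greedy iterates achieve O(1/sqrt(n)) error.
w = rng.random(m)
w /= w.sum()
f = G @ w

f_n = np.zeros(d)
errors = []
for n in range(1, 101):
    lam = 2.0 / (n + 1)  # relaxation weight (an illustrative choice)
    # Greedy step: among all atoms g, pick the one minimizing
    # || (1 - lam) * f_n + lam * g - f ||.
    candidates = (1 - lam) * f_n[:, None] + lam * G
    best = np.argmin(np.linalg.norm(candidates - f[:, None], axis=0))
    f_n = candidates[:, best]
    errors.append(np.linalg.norm(f - f_n))
```

After 100 greedy steps the residual norm is small, and plotting `errors` against `1/np.sqrt(np.arange(1, 101))` shows the expected decay profile.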
Related Items (showing first 100 items)
Greedy algorithms for prediction
Another look at statistical learning theory and regularization
Accuracy of suboptimal solutions to kernel principal component analysis
Estimates of covering numbers of convex sets with slowly decaying orthogonal subsets
Density estimation with stagewise optimization of the empirical risk
Rescaled pure greedy algorithm for Hilbert and Banach spaces
Rates of convex approximation in non-Hilbert spaces
Approximation Bounds for Some Sparse Kernel Regression Algorithms
A receding-horizon regulator for nonlinear systems and a neural approximation
A nonparametric estimator for the covariance function of functional data
A Sobolev-type upper bound for rates of approximation by linear combinations of Heaviside plane waves
Convergence and rate of convergence of some greedy algorithms in convex optimization
Uniform approximation rates and metric entropy of shallow neural networks
ReLU deep neural networks from the hierarchical basis perspective
Nonlinear function approximation: computing smooth solutions with an adaptive greedy algorithm
High-dimensional change-point estimation: combining filtering with convex optimization
Neural network with unbounded activation functions is universal approximator
Generalized cellular neural networks (GCNNs) constructed using particle swarm optimization for spatio-temporal evolutionary pattern identification
A novel scrambling digital image watermark algorithm based on double transform domains
A note on error bounds for approximation in inner product spaces
Some remarks on greedy algorithms
Nonlinear approximation in finite-dimensional spaces
The convex geometry of linear inverse problems
Comparison of the convergence rate of pure greedy and orthogonal greedy algorithms
Restricted polynomial regression
Training Neural Networks as Learning Data-adaptive Kernels: Provable Representation and Approximation Benefits
Estimation of projection pursuit regression via alternating linearization
Characterization of the variation spaces corresponding to shallow neural networks
Degree of Approximation Results for Feedforward Networks Approximating Unknown Mappings and Their Derivatives
A survey on universal approximation and its limits in soft computing techniques
Complexity estimates based on integral transforms induced by computational units
Accuracy of approximations of solutions to Fredholm equations by kernel methods
Greedy training algorithms for neural networks and applications to PDEs
Approximation with neural networks activated by ramp sigmoids
Can dictionary-based computational models outperform the best linear ones?
Vector greedy algorithms
Approximation by finite mixtures of continuous density functions that vanish at infinity
Minimization of Error Functionals over Perceptron Networks
Greedy expansions with prescribed coefficients in Hilbert spaces
On \(n\)-term approximation with positive coefficients
Learning semidefinite regularizers
Approximation Properties of Ridge Functions and Extreme Learning Machines
Convergence properties of cascade correlation in function approximation
On function recovery by neural networks based on orthogonal expansions
Finite Neuron Method and Convergence Analysis
Approximation and learning by greedy algorithms
Schwarz iterative methods: infinite space splittings
Deviation optimal learning using greedy \(Q\)-aggregation
New insights into Witsenhausen's counterexample
Some extensions of radial basis functions and their applications in artificial intelligence
Regularized vector field learning with sparse approximation for mismatch removal
Estimates of variation with respect to a set and applications to optimization problems
Approximation of functions of finite variation by superpositions of a sigmoidal function
Some comparisons of complexity in dictionary-based and linear computational models
A note on a scale-sensitive dimension of linear bounded functionals in Banach spaces
Simultaneous greedy approximation in Banach spaces
Approximation on anisotropic Besov classes with mixed norms by standard information
Learning with generalization capability by kernel methods of bounded complexity
Regularized greedy algorithms for network training with data noise
Ridge functions and orthonormal ridgelets
Simultaneous approximation by greedy algorithms
An approximation result for nets in functional estimation
Approximation with random bases: pro et contra
Geometric Rates of Approximation by Neural Networks
Approximation by superpositions of a sigmoidal function
Complexity of Gaussian-radial-basis networks approximating smooth functions
Some problems in the theory of ridge functions
Insights into randomized algorithms for neural networks: practical issues and common pitfalls
Risk bounds for mixture density estimation
Boosting the margin: a new explanation for the effectiveness of voting methods
Models of knowing and the investigation of dynamical systems
Approximation schemes for functional optimization problems
A note on error bounds for function approximation using nonlinear networks
An Integral Upper Bound for Neural Network Approximation
Convergence analysis of convex incremental neural networks
On a greedy algorithm in the space \(L_p[0,1]\)
Harmonic analysis of neural networks
Greedy algorithms and \(M\)-term approximation with regard to redundant dictionaries
A better approximation for balls
Information-theoretic determination of minimax rates of convergence
On simultaneous approximations by radial basis function neural networks
Scalable Semidefinite Programming
Joint Inversion of Multiple Observations
Learning a function from noisy samples at a finite sparse set of points
Functional aggregation for nonparametric regression
Local greedy approximation for nonlinear regression and neural network training
Generalized approximate weak greedy algorithms
Greedy approximation in convex optimization
Approximation properties of local bases assembled from neural network transfer functions
Boosting with early stopping: convergence and consistency
Rates of minimization of error functionals over Boolean variable-basis functions
Generalization bounds for sparse random feature expansions
High-order approximation rates for shallow neural networks with cosine and \(\mathrm{ReLU}^k\) activation functions
A New Function Space from Barron Class and Application to Neural Network Approximation
A mathematical perspective of machine learning