A simple lemma on greedy approximation in Hilbert space and convergence rates for projection pursuit regression and neural network training

From MaRDI portal

Publication:1192997

DOI: 10.1214/AOS/1176348546
zbMath: 0746.62060
OpenAlex: W2044828368
Wikidata: Q124997995
Scholia: Q124997995
MaRDI QID: Q1192997

Lee Kenneth Jones

Publication date: 27 September 1992

Published in: The Annals of Statistics

Full work available at URL: https://doi.org/10.1214/aos/1176348546
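The paper's central result (often called Jones' lemma, or the Maurey–Jones–Barron bound) states that an element of the closed convex hull of a bounded dictionary in a Hilbert space can be approximated by an n-term greedy convex combination with squared error of order 1/n. The snippet below is a minimal numerical sketch of such a relaxed greedy iteration, not the paper's own construction; the dimension, dictionary size, and random target are illustrative assumptions.

```python
# Minimal sketch of relaxed greedy approximation in a Hilbert space (here R^d
# with the Euclidean inner product). A target f lying in the convex hull of a
# dictionary G is approximated by iterates
#     f_n = (1 - 1/n) f_{n-1} + (1/n) g_n,
# where g_n is chosen greedily from G. The Maurey-Jones-Barron argument gives
# ||f - f_n||^2 <= max_g ||g - f||^2 / n.
# All sizes and the random data are illustrative assumptions, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
d, m = 50, 200                        # ambient dimension, dictionary size (assumed)
G = rng.normal(size=(m, d))           # dictionary elements g_1, ..., g_m
w = rng.dirichlet(np.ones(m))         # convex weights, so f lies in conv(G)
f = w @ G                             # target element of the Hilbert space

f_n = np.zeros(d)
errors = []
for n in range(1, 201):
    a = 1.0 / n                       # relaxation step 1/n (a fixed, simple choice)
    # Greedy step: pick the dictionary element minimizing ||f - ((1-a) f_n + a g)||.
    residual = f - (1 - a) * f_n
    g_best = G[np.argmin(np.linalg.norm(residual[None, :] - a * G, axis=1))]
    f_n = (1 - a) * f_n + a * g_best
    errors.append(np.linalg.norm(f - f_n) ** 2)

# n * ||f - f_n||^2 should stay bounded, i.e. the squared error decays like O(1/n).
for n in (10, 50, 100, 200):
    print(f"n = {n:3d}   ||f - f_n||^2 = {errors[n - 1]:.5f}   n * error = {n * errors[n - 1]:.3f}")
```

A fixed step of 1/n already yields the O(1/n) rate via the standard Maurey-type induction; optimising the mixing weight at each iteration can only improve the constant.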




Related Items (showing first 100 items)

Greedy algorithms for prediction
Unnamed Item
Another look at statistical learning theory and regularization
Accuracy of suboptimal solutions to kernel principal component analysis
Estimates of covering numbers of convex sets with slowly decaying orthogonal subsets
Density estimation with stagewise optimization of the empirical risk
Rescaled pure greedy algorithm for Hilbert and Banach spaces
Rates of convex approximation in non-Hilbert spaces
Approximation Bounds for Some Sparse Kernel Regression Algorithms
A receding-horizon regulator for nonlinear systems and a neural approximation
A NONPARAMETRIC ESTIMATOR FOR THE COVARIANCE FUNCTION OF FUNCTIONAL DATA
A Sobolev-type upper bound for rates of approximation by linear combinations of Heaviside plane waves
Convergence and rate of convergence of some greedy algorithms in convex optimization
Uniform approximation rates and metric entropy of shallow neural networks
ReLU deep neural networks from the hierarchical basis perspective
Nonlinear function approximation: computing smooth solutions with an adaptive greedy algorithm
High-dimensional change-point estimation: combining filtering with convex optimization
Neural network with unbounded activation functions is universal approximator
GENERALIZED CELLULAR NEURAL NETWORKS (GCNNs) CONSTRUCTED USING PARTICLE SWARM OPTIMIZATION FOR SPATIO-TEMPORAL EVOLUTIONARY PATTERN IDENTIFICATION
A novel scrambling digital image watermark algorithm based on double transform domains
A note on error bounds for approximation in inner product spaces
Some remarks on greedy algorithms
Nonlinear approximation in finite-dimensional spaces
The convex geometry of linear inverse problems
Comparison of the convergence rate of pure greedy and orthogonal greedy algorithms
Restricted polynomial regression
Training Neural Networks as Learning Data-adaptive Kernels: Provable Representation and Approximation Benefits
Estimation of projection pursuit regression via alternating linearization
Characterization of the variation spaces corresponding to shallow neural networks
Degree of Approximation Results for Feedforward Networks Approximating Unknown Mappings and Their Derivatives
A survey on universal approximation and its limits in soft computing techniques.
Complexity estimates based on integral transforms induced by computational units
Accuracy of approximations of solutions to Fredholm equations by kernel methods
Unnamed Item
Greedy training algorithms for neural networks and applications to PDEs
Approximation with neural networks activated by ramp sigmoids
Can dictionary-based computational models outperform the best linear ones?
Vector greedy algorithms
Approximation by finite mixtures of continuous density functions that vanish at infinity
Minimization of Error Functionals over Perceptron Networks
Greedy expansions with prescribed coefficients in Hilbert spaces
On \(n\)-term approximation with positive coefficients
Learning semidefinite regularizers
Approximation Properties of Ridge Functions and Extreme Learning Machines
Convergence properties of cascade correlation in function approximation.
On function recovery by neural networks based on orthogonal expansions
Finite Neuron Method and Convergence Analysis
Approximation and learning by greedy algorithms
Schwarz iterative methods: infinite space splittings
Deviation optimal learning using greedy \(Q\)-aggregation
New insights into Witsenhausen's counterexample
Some extensions of radial basis functions and their applications in artificial intelligence
Regularized vector field learning with sparse approximation for mismatch removal
Estimates of variation with respect to a set and applications to optimization problems
Approximation of functions of finite variation by superpositions of a sigmoidal function.
Some comparisons of complexity in dictionary-based and linear computational models
A note on a scale-sensitive dimension of linear bounded functionals in Banach spaces
Simultaneous greedy approximation in Banach spaces
Approximation on anisotropic Besov classes with mixed norms by standard information
Learning with generalization capability by kernel methods of bounded complexity
Regularized greedy algorithms for network training with data noise
Ridge functions and orthonormal ridgelets
Unnamed Item
Simultaneous approximation by greedy algorithms
An approximation result for nets in functional estimation
Unnamed Item
Approximation with random bases: pro et contra
Geometric Rates of Approximation by Neural Networks
Approximation by superpositions of a sigmoidal function
Complexity of Gaussian-radial-basis networks approximating smooth functions
Some problems in the theory of ridge functions
Insights into randomized algorithms for neural networks: practical issues and common pitfalls
Risk bounds for mixture density estimation
Boosting the margin: a new explanation for the effectiveness of voting methods
Models of knowing and the investigation of dynamical systems
Approximation schemes for functional optimization problems
A note on error bounds for function approximation using nonlinear networks
An Integral Upper Bound for Neural Network Approximation
Convergence analysis of convex incremental neural networks
On a greedy algorithm in the space \(L_p[0,1]\)
Harmonic analysis of neural networks
Greedy algorithms and \(M\)-term approximation with regard to redundant dictionaries
A better approximation for balls
Information-theoretic determination of minimax rates of convergence
On simultaneous approximations by radial basis function neural networks
Scalable Semidefinite Programming
Joint Inversion of Multiple Observations
Learning a function from noisy samples at a finite sparse set of points
Functional aggregation for nonparametric regression.
Local greedy approximation for nonlinear regression and neural network training.
Unnamed Item
Generalized approximate weak greedy algorithms
Greedy approximation in convex optimization
Approximation properties of local bases assembled from neural network transfer functions
Boosting with early stopping: convergence and consistency
Rates of minimization of error functionals over Boolean variable-basis functions
Generalization bounds for sparse random feature expansions
High-order approximation rates for shallow neural networks with cosine and \(\mathrm{ReLU}^k\) activation functions
A New Function Space from Barron Class and Application to Neural Network Approximation
A mathematical perspective of machine learning







This page was built for publication: A simple lemma on greedy approximation in Hilbert space and convergence rates for projection pursuit regression and neural network training