ESTIMATING THE APPROXIMATION ERROR IN LEARNING THEORY

From MaRDI portal


DOI: 10.1142/S0219530503000089

zbMath: 1079.68089

MaRDI QID: Q4474578

Ding-Xuan Zhou, Stephen Smale

Publication date: 12 July 2004

Published in: Analysis and Applications


68T05: Learning and adaptive systems in artificial intelligence

41A25: Rate of convergence, degree of approximation


Related Items

SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming
Shannon sampling and function reconstruction from point values
Sampling and Stability
Online Classification with Varying Gaussians
Mercer theorem for RKHS on noncompact sets
Optimal learning rates for least squares regularized regression with unbounded sampling
Learning performance of Tikhonov regularization algorithm with geometrically beta-mixing observations
Regularization in kernel learning
Multi-kernel regularized classifiers
Behavior of a functional in learning theory
Reproducing kernel Hilbert spaces associated with analytic translation-invariant Mercer kernels
Derivative reproducing properties for kernel methods in learning theory
Estimation of the misclassification error for multicategory support vector machine classification
Orthogonality from disjoint support in reproducing kernel Hilbert spaces
Learning and approximation by Gaussians on Riemannian manifolds
The convergence rate for a \(K\)-functional in learning theory
Fast rates for support vector machines using Gaussian kernels
A note on application of integral operator in learning theory
Learning rates of least-square regularized regression with polynomial kernels
Positive definite dot product kernels in learning theory
Computational complexity of the integration problem for anisotropic classes
The covering number in learning theory
The generalization performance of ERM algorithm with strongly mixing observations
Fully online classification by regularization
Optimal shift invariant spaces and their Parseval frame generators
Learning with sample dependent hypothesis spaces
On approximation by reproducing kernel spaces in weighted \(L^p\) spaces
Ranking and empirical minimization of \(U\)-statistics
Convergence analysis of online algorithms
Approximation with polynomial kernels and SVM classifiers
Shannon sampling. II: Connections to learning theory
Rates of minimization of error functionals over Boolean variable-basis functions
Least Square Regression with lp-Coefficient Regularization
SVM LEARNING AND Lp APPROXIMATION BY GAUSSIANS ON RIEMANNIAN MANIFOLDS
ONLINE LEARNING WITH MARKOV SAMPLING



Cites Work