ESTIMATING THE APPROXIMATION ERROR IN LEARNING THEORY

From MaRDI portal

Publication:4474578


DOI: 10.1142/S0219530503000089
zbMath: 1079.68089
MaRDI QID: Q4474578

Ding-Xuan Zhou, Stephen Smale

Publication date: 12 July 2004

Published in: Analysis and Applications


68T05: Learning and adaptive systems in artificial intelligence

41A25: Rate of convergence, degree of approximation


Related Items

SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming
Shannon sampling and function reconstruction from point values
Sampling and Stability
Online Classification with Varying Gaussians
Generalization bounds of ERM algorithm with \(V\)-geometrically ergodic Markov chains
Mercer's theorem on general domains: on the interaction between measures, kernels, and RKHSs
Estimation of convergence rate for multi-regression learning algorithm
Online learning for quantile regression and support vector regression
Concentration estimates for learning with \(\ell^{1}\)-regularizer and data dependent hypothesis spaces
Mercer theorem for RKHS on noncompact sets
Optimal learning rates for least squares regularized regression with unbounded sampling
Learning performance of Tikhonov regularization algorithm with geometrically beta-mixing observations
On complex-valued 2D eikonals. IV: continuation past a caustic
Regularized least-squares regression: learning from a sequence
Unified approach to coefficient-based regularized regression
Regularization in kernel learning
Multi-kernel regularized classifiers
Behavior of a functional in learning theory
Reproducing kernel Hilbert spaces associated with analytic translation-invariant Mercer kernels
Derivative reproducing properties for kernel methods in learning theory
Estimation of the misclassification error for multicategory support vector machine classification
Orthogonality from disjoint support in reproducing kernel Hilbert spaces
Learning and approximation by Gaussians on Riemannian manifolds
The convergence rate for a \(K\)-functional in learning theory
Fast rates for support vector machines using Gaussian kernels
A note on application of integral operator in learning theory
Learning rates of least-square regularized regression with polynomial kernels
Support vector machines regression with \(l^1\)-regularizer
Positive definite dot product kernels in learning theory
Computational complexity of the integration problem for anisotropic classes
The covering number in learning theory
Generalization errors of Laplacian regularized least squares regression
The generalization performance of ERM algorithm with strongly mixing observations
Fully online classification by regularization
Optimal shift invariant spaces and their Parseval frame generators
Learning with sample dependent hypothesis spaces
On approximation by reproducing kernel spaces in weighted \(L^p\) spaces
Ranking and empirical minimization of \(U\)-statistics
Convergence analysis of online algorithms
Approximation with polynomial kernels and SVM classifiers
Shannon sampling. II: Connections to learning theory
Rates of minimization of error functionals over Boolean variable-basis functions
Least-squares regularized regression with dependent samples and \(q\)-penalty
ERM learning algorithm for multi-class classification
Optimal rate of the regularized regression learning algorithm
Least Square Regression with \(l^p\)-Coefficient Regularization
ONLINE REGRESSION WITH VARYING GAUSSIANS AND NON-IDENTICAL DISTRIBUTIONS
Estimates of learning rates of regularized regression via polyline functions
SVM LEARNING AND \(L^p\) APPROXIMATION BY GAUSSIANS ON RIEMANNIAN MANIFOLDS
ONLINE LEARNING WITH MARKOV SAMPLING



Cites Work