ESTIMATING THE APPROXIMATION ERROR IN LEARNING THEORY
Publication: 4474578
DOI: 10.1142/S0219530503000089
zbMath: 1079.68089
MaRDI QID: Q4474578
Publication date: 12 July 2004
Published in: Analysis and Applications
Keywords: interpolation space, reproducing kernel Hilbert space, approximation error, logarithmic rate of convergence, learning theory, kernel machine learning
Mathematics Subject Classification: Learning and adaptive systems in artificial intelligence (68T05); Rate of convergence, degree of approximation (41A25)
Related Items (only showing first 100 items)
Coupled Generation ⋮ Kernel Methods for the Approximation of Nonlinear Systems ⋮ Convergence analysis for kernel-regularized online regression associated with an RRKHS ⋮ Error analysis of kernel regularized pairwise learning with a strongly convex loss ⋮ Optimality of regularized least squares ranking with imperfect kernels ⋮ Inverse learning in Hilbert scales ⋮ Coefficient-based regularized distribution regression ⋮ Nonlinear Tikhonov regularization in Hilbert scales for inverse learning ⋮ Support vector machines regression with unbounded sampling ⋮ Online Classification with Varying Gaussians ⋮ The covering number in learning theory ⋮ Ivanov-Regularised Least-Squares Estimators over Large RKHSs and Their Interpolation Spaces ⋮ Error analysis on Hermite learning with gradient data ⋮ The kernel regularized learning algorithm for solving Laplace equation with Dirichlet boundary ⋮ Least-squares regularized regression with dependent samples and q-penalty ⋮ Multi-penalty regularization in learning theory ⋮ Regularization in kernel learning ⋮ ERM learning algorithm for multi-class classification ⋮ Fully online classification by regularization ⋮ Optimal shift invariant spaces and their Parseval frame generators ⋮ On grouping effect of elastic net ⋮ Shannon sampling and function reconstruction from point values ⋮ Learning with sample dependent hypothesis spaces ⋮ Multi-kernel regularized classifiers ⋮ An efficient kernel learning algorithm for semisupervised regression problems ⋮ Gradient descent for robust kernel-based regression ⋮ Learning rates of regularized regression on the unit sphere ⋮ ERM scheme for quantile regression ⋮ The learning rate of \(l_2\)-coefficient regularized classification with strong loss ⋮ Error analysis on regularized regression based on the maximum correntropy criterion ⋮ Summation of Gaussian shifts as Jacobi's third theta function ⋮ Approximation by multivariate Bernstein-Durrmeyer operators and learning rates of least-squares regularized regression with multivariate polynomial kernels ⋮ Echo state networks are universal ⋮ Principled analytic classifier for positive-unlabeled learning via weighted integral probability metric ⋮ Optimal learning rates for least squares regularized regression with unbounded sampling ⋮ Generalization errors of Laplacian regularized least squares regression ⋮ Generalization bounds of ERM algorithm with Markov chain samples ⋮ Learning performance of Tikhonov regularization algorithm with geometrically beta-mixing observations ⋮ Learning rates for the kernel regularized regression with a differentiable strongly convex loss ⋮ Convergence of online pairwise regression learning with quadratic loss ⋮ Kernel-based maximum correntropy criterion with gradient descent method ⋮ On the K-functional in learning theory ⋮ On approximation by reproducing kernel spaces in weighted \(L^p\) spaces ⋮ On complex-valued 2D eikonals. IV: continuation past a caustic ⋮
Ranking and empirical minimization of \(U\)-statistics ⋮ ERM learning with unbounded sampling ⋮ Generalization bounds of ERM algorithm with \(V\)-geometrically ergodic Markov chains ⋮ Mercer's theorem on general domains: on the interaction between measures, kernels, and RKHSs ⋮ Optimal regression rates for SVMs using Gaussian kernels ⋮ Estimation of convergence rate for multi-regression learning algorithm ⋮ Penalized empirical risk minimization over Besov spaces ⋮ Conditional quantiles with varying Gaussians ⋮ Online learning for quantile regression and support vector regression ⋮ Regularized least-squares regression: learning from a sequence ⋮ Bias corrected regularization kernel method in ranking ⋮ The generalization performance of ERM algorithm with strongly mixing observations ⋮ Quantile regression with \(\ell_1\)-regularization and Gaussian kernels ⋮ Unified approach to coefficient-based regularized regression ⋮ A simpler approach to coefficient regularized support vector machines regression ⋮ Constructive analysis for least squares regression with generalized \(K\)-norm regularization ⋮ Convergence rate of SVM for kernel-based robust regression ⋮ Kernel gradient descent algorithm for information theoretic learning ⋮ Convergence analysis of online algorithms ⋮ Behavior of a functional in learning theory ⋮ Reproducing kernel Hilbert spaces associated with analytic translation-invariant Mercer kernels ⋮ Derivative reproducing properties for kernel methods in learning theory ⋮ Estimation of the misclassification error for multicategory support vector machine classification ⋮ The Goldenshluger-Lepski method for constrained least-squares estimators over RKHSs ⋮ Optimal rate of the regularized regression learning algorithm ⋮ Statistical performance of optimal scoring in reproducing kernel Hilbert spaces ⋮ Orthogonality from disjoint support in reproducing kernel Hilbert spaces ⋮ Regularized kernel-based reconstruction in generalized Besov spaces ⋮ A closer look at covering number bounds for Gaussian kernels ⋮ Learning and approximation by Gaussians on Riemannian manifolds ⋮ The convergence rate for a \(K\)-functional in learning theory ⋮ Support vector machines regression with \(l^1\)-regularizer ⋮ Positive definite dot product kernels in learning theory ⋮ Concentration estimates for learning with \(\ell^{1}\)-regularizer and data dependent hypothesis spaces ⋮ Mercer theorem for RKHS on noncompact sets ⋮ Stability analysis of learning algorithms for ontology similarity computation ⋮ Computational complexity of the integration problem for anisotropic classes ⋮ SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming ⋮ Approximation with polynomial kernels and SVM classifiers ⋮ Sampling and Stability ⋮ Fast rates for support vector machines using Gaussian kernels ⋮ Unregularized online algorithms with varying Gaussians ⋮ Least Square Regression with lp-Coefficient Regularization ⋮ Distributed regularized least squares with flexible Gaussian kernels ⋮ ONLINE LEARNING WITH MARKOV SAMPLING ⋮ A note on application of integral operator in learning theory ⋮ Nyström subsampling method for coefficient-based regularized regression ⋮ Performance analysis of the LapRSSLG algorithm in learning theory ⋮ Analysis of Regression Algorithms with Unbounded Sampling ⋮ Fast and strong convergence of online learning algorithms ⋮ Error Estimates for Multivariate Regression on Discretized Function Spaces ⋮ SVM LEARNING AND Lp APPROXIMATION BY GAUSSIANS ON RIEMANNIAN MANIFOLDS ⋮
ONLINE REGRESSION WITH VARYING GAUSSIANS AND NON-IDENTICAL DISTRIBUTIONS ⋮ Density problem and approximation error in learning theory ⋮ Learning rates of regression with q-norm loss and threshold ⋮ Error bounds for learning the kernel
Cites Work
- Bounds on multivariate polynomials and exponential error estimates for multiquadric interpolation
- Decision theoretic generalizations of the PAC model for neural net and other learning applications
- Regularization networks and support vector machines
- Local error estimates for radial basis function interpolation of scattered data