Capacity of reproducing kernel spaces in learning theory
Publication: 3547887
DOI: 10.1109/TIT.2003.813564
zbMath: 1290.62033
OpenAlex: W2112531253
MaRDI QID: Q3547887
Publication date: 21 December 2008
Published in: IEEE Transactions on Information Theory
Full work available at URL: https://doi.org/10.1109/tit.2003.813564
Mathematics Subject Classification:
Computational learning theory (68Q32)
Nonparametric estimation (62G05)
Learning and adaptive systems in artificial intelligence (68T05)
Hilbert spaces with reproducing kernels (= (proper) functional Hilbert spaces, including de Branges-Rovnyak and other structured spaces) (46E22)
Related Items (only showing first 100 items)
\(n\)-best kernel approximation in reproducing kernel Hilbert spaces ⋮ A close look at the entropy numbers of the unit ball of the reproducing Hilbert space of isotropic positive definite kernels ⋮ Learning Rates for Classification with Gaussian Kernels ⋮ Random sampling and reconstruction in multiply generated shift-invariant spaces ⋮ Generalized Mercer Kernels and Reproducing Kernel Banach Spaces ⋮ Variance-based regularization with convex objectives ⋮
Generalized representer theorems in Banach spaces ⋮ Statistical consistency of coefficient-based conditional quantile regression ⋮ Learning by atomic norm regularization with polynomial kernels ⋮ ERM learning algorithm for multi-class classification ⋮ Hermite learning with gradient data ⋮ On grouping effect of elastic net ⋮
Shannon sampling and function reconstruction from point values ⋮ Learning with sample dependent hypothesis spaces ⋮ Application of integral operator for regularized least-square regression ⋮ The convergence rates of Shannon sampling learning algorithms ⋮ Multi-kernel regularized classifiers ⋮ Integral operator approach to learning theory with unbounded sampling ⋮ An oracle inequality for regularized risk minimizers with strongly mixing observations ⋮
The learning rate of \(l_2\)-coefficient regularized classification with strong loss ⋮ Error analysis on regularized regression based on the maximum correntropy criterion ⋮ Modal additive models with data-driven structure identification ⋮ Kernel Methods for the Approximation of Nonlinear Systems ⋮ Optimal learning rates for least squares regularized regression with unbounded sampling ⋮ Generalization errors of Laplacian regularized least squares regression ⋮
Learning performance of Tikhonov regularization algorithm with geometrically beta-mixing observations ⋮ Learning theory approach to a system identification problem involving atomic norm ⋮ Learning rates for the kernel regularized regression with a differentiable strongly convex loss ⋮ Quantitative convergence analysis of kernel based large-margin unified machines ⋮ Approximation analysis of learning algorithms for support vector regression and quantile regression ⋮ An empirical feature-based learning algorithm producing sparse approximations ⋮
On approximation by reproducing kernel spaces in weighted \(L^p\) spaces ⋮ Statistical performance of support vector machines ⋮ On the regularized Laplacian eigenmaps ⋮ ERM learning with unbounded sampling ⋮ Learning from regularized regression algorithms with \(p\)-order Markov chain sampling ⋮ Learning from non-irreducible Markov chains ⋮ Concentration estimates for learning with unbounded sampling ⋮ Generalization bounds of ERM algorithm with \(V\)-geometrically ergodic Markov chains ⋮
Estimation of convergence rate for multi-regression learning algorithm ⋮ Semi-supervised learning with the help of Parzen windows ⋮ Error bounds for \(l^p\)-norm multiple kernel learning with least square loss ⋮ Regularized least-squares regression: learning from a sequence ⋮ The generalization performance of ERM algorithm with strongly mixing observations ⋮ Support vector machines regression with unbounded sampling ⋮ Generalization Analysis of Fredholm Kernel Regularized Classifiers ⋮
Constrained ERM Learning of Canonical Correlation Analysis: A Least Squares Perspective ⋮ Unified approach to coefficient-based regularized regression ⋮ Classification with non-i.i.d. sampling ⋮ A simpler approach to coefficient regularized support vector machines regression ⋮ Applications of the Bernstein-Durrmeyer operators in estimating the norm of Mercer kernel matrices ⋮ Constructive analysis for least squares regression with generalized \(K\)-norm regularization ⋮ Some Properties of Reproducing Kernel Banach and Hilbert Spaces ⋮
Kernel gradient descent algorithm for information theoretic learning ⋮ Convergence rate of kernel canonical correlation analysis ⋮ Convergence analysis of online algorithms ⋮ Generalization performance of least-square regularized regression algorithm with Markov chain samples ⋮ Reproducing kernel Hilbert spaces associated with analytic translation-invariant Mercer kernels ⋮ Derivative reproducing properties for kernel methods in learning theory ⋮ Constructive analysis for coefficient regularization regression algorithms ⋮
Estimation of the misclassification error for multicategory support vector machine classification ⋮ Distributed kernel-based gradient descent algorithms ⋮ Learning performance of regularized regression with multiscale kernels based on Markov observations ⋮ A closer look at covering number bounds for Gaussian kernels ⋮ Learning and approximation by Gaussians on Riemannian manifolds ⋮ Consistency of regularized spectral clustering ⋮ Logistic classification with varying gaussians ⋮ The covering number for some Mercer kernel Hilbert spaces ⋮
Learning from non-identical sampling for classification ⋮ Concentration estimates for learning with \(\ell ^{1}\)-regularizer and data dependent hypothesis spaces ⋮ The consistency of multicategory support vector machines ⋮ Covering numbers of Gaussian reproducing kernel Hilbert spaces ⋮ Mercer theorem for RKHS on noncompact sets ⋮ Coefficient-based regression with non-identical unbounded sampling ⋮ Least-square regularized regression with non-iid sampling ⋮ SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming ⋮
Approximation with polynomial kernels and SVM classifiers ⋮ Concentration estimates for the moving least-square method in learning theory ⋮ Tight framelets and fast framelet filter bank transforms on manifolds ⋮ Semi-supervised learning based on high density region estimation ⋮ Statistical analysis of the moving least-squares method with unbounded sampling ⋮ Least Square Regression with lp-Coefficient Regularization ⋮ Error analysis of multicategory support vector machine classifiers ⋮ Applications of Bernstein-Durrmeyer operators in estimating the covering number ⋮
ONLINE LEARNING WITH MARKOV SAMPLING ⋮ Theory of Classification: a Survey of Some Recent Advances ⋮ Analysis of Regression Algorithms with Unbounded Sampling ⋮ GENERALIZATION BOUNDS OF REGULARIZATION ALGORITHMS DERIVED SIMULTANEOUSLY THROUGH HYPOTHESIS SPACE COMPLEXITY, ALGORITHMIC STABILITY AND DATA QUALITY ⋮ Analysis of support vector machines regression ⋮ Learning from uniformly ergodic Markov chains ⋮ SVM LEARNING AND Lp APPROXIMATION BY GAUSSIANS ON RIEMANNIAN MANIFOLDS ⋮
Asymptotic normality of support vector machine variants and other regularized kernel methods ⋮ Density problem and approximation error in learning theory ⋮ Deep neural networks for rotation-invariance approximation and learning ⋮ Reproducing Kernel Banach Spaces with the ℓ1 Norm II: Error Analysis for Regularized Least Square Regression ⋮ Estimates of the norm of the Mercer kernel matrices with discrete orthogonal transforms ⋮ Estimates of learning rates of regularized regression via polyline functions ⋮ Generalization performance of graph-based semi-supervised classification ⋮ Optimal rate for support vector machine regression with Markov chain samples ⋮ Gradient learning in a classification setting by gradient descent