The covering number in learning theory
Publication: 1872632
DOI: 10.1006/jcom.2002.0635
zbMath: 1016.68044
OpenAlex: W1968436459
MaRDI QID: Q1872632
Publication date: 14 May 2003
Published in: Journal of Complexity
Full work available at URL: https://semanticscholar.org/paper/20c935d7afb63933210a4476ef0fa09b92d8f7fe
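For context, the central object of the paper is the covering number of a ball of a reproducing kernel Hilbert space, viewed as a subset of \(C(X)\). The display below sketches the standard formulation used in this literature; the notation is illustrative rather than quoted from the paper. For a compact metric space \(X\), a subset \(F \subset C(X)\), and \(\eta > 0\),
\[
\mathcal{N}(F,\eta) \;=\; \min\Bigl\{ m \in \mathbb{N} : \exists\, f_1, \dots, f_m \in C(X) \text{ with } F \subseteq \bigcup_{j=1}^{m} \bigl\{ f : \|f - f_j\|_\infty \le \eta \bigr\} \Bigr\},
\]
i.e. the minimal number of balls of radius \(\eta\) in the uniform norm needed to cover \(F\). Bounds on \(\mathcal{N}(B_R,\eta)\) for balls \(B_R\) of the RKHS of a Mercer kernel are what drive the sample error estimates in many of the related items listed below.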
Related Items
- Shannon sampling and function reconstruction from point values
- Are Loss Functions All the Same?
- Learning theory of distributed spectral algorithms
- A Reproducing Kernel Hilbert Space Approach to Functional Calibration of Computer Models
- High-probability generalization bounds for pointwise uniformly stable algorithms
- Learning rates for regularized least squares ranking algorithm
- Refined Rademacher Chaos Complexity Bounds with Applications to the Multikernel Learning Problem
- Support vector machines regression with unbounded sampling
- Generalization Analysis of Fredholm Kernel Regularized Classifiers
- Learning Rates for Classification with Gaussian Kernels
- Random sampling and reconstruction in multiply generated shift-invariant spaces
- Coefficient-based regularization network with variance loss for error
- Learning Rates of lq Coefficient Regularization Learning with Gaussian Kernel
- SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming
- On Reject and Refine Options in Multicategory Classification
- Performance analysis of the LapRSSLG algorithm in learning theory
- Non parametric learning approach to estimate conditional quantiles in the dependent functional data case
- Deep neural networks for rotation-invariance approximation and learning
- Reproducing kernel Hilbert spaces on manifolds: Sobolev and diffusion spaces
- Optimal rate for support vector machine regression with Markov chain samples
- Online Classification with Varying Gaussians
- Error analysis of the kernel regularized regression based on refined convex losses and RKBSs
- Analysis of k-partite ranking algorithm in area under the receiver operating characteristic curve criterion
- Reproducing properties of differentiable Mercer-like kernels
- Statistical consistency of coefficient-based conditional quantile regression
- Least-squares regularized regression with dependent samples and q-penalty
- Regularization in kernel learning
- Hermite learning with gradient data
- Estimates of covering numbers of convex sets with slowly decaying orthogonal subsets
- Learning with sample dependent hypothesis spaces
- Application of integral operator for regularized least-square regression
- The convergence rates of Shannon sampling learning algorithms
- Distributed learning via filtered hyperinterpolation on manifolds
- Fast rates of minimum error entropy with heavy-tailed noise
- Multi-kernel regularized classifiers
- Modelling functional additive quantile regression using support vector machines approach
- ERM scheme for quantile regression
- The learning rate of \(l_2\)-coefficient regularized classification with strong loss
- Error analysis on regularized regression based on the maximum correntropy criterion
- Modal additive models with data-driven structure identification
- Approximation by multivariate Bernstein-Durrmeyer operators and learning rates of least-squares regularized regression with multivariate polynomial kernels
- An efficient model-free estimation of multiclass conditional probability
- Generalization errors of Laplacian regularized least squares regression
- Quantitative convergence analysis of kernel based large-margin unified machines
- Approximation analysis of learning algorithms for support vector regression and quantile regression
- On approximation by reproducing kernel spaces in weighted \(L^p\) spaces
- Consistency of spectral clustering
- On the regularized Laplacian eigenmaps
- ERM learning with unbounded sampling
- Learning from regularized regression algorithms with \(p\)-order Markov chain sampling
- Nonparametric distributed learning under general designs
- Mercer's theorem on general domains: on the interaction between measures, kernels, and RKHSs
- Approximations of semicontinuous functions with applications to stochastic optimization and statistical estimation
- Estimation of convergence rate for multi-regression learning algorithm
- Convergence rate and Bahadur type representation of general smoothing spline M-estimates
- Conditional quantiles with varying Gaussians
- Weak consistency of the support vector machine quantile regression approach when covariates are functions
- Regularized least-squares regression: learning from a sequence
- Generalization ability of fractional polynomial models
- Unified approach to coefficient-based regularized regression
- Convolution random sampling in multiply generated shift-invariant spaces of \(L^p(\mathbb{R}^d)\)
- Classification with non-i.i.d. sampling
- Query-dependent ranking and its asymptotic properties
- Applications of the Bernstein-Durrmeyer operators in estimating the norm of Mercer kernel matrices
- Constructive analysis for least squares regression with generalized \(K\)-norm regularization
- Entropy numbers of finite-dimensional embeddings
- Random sampling in multiply generated shift-invariant subspaces of mixed Lebesgue spaces \(L^{p,q}(\mathbb{R}\times\mathbb{R}^d)\)
- Kernel gradient descent algorithm for information theoretic learning
- Convergence analysis of online algorithms
- Generalization performance of least-square regularized regression algorithm with Markov chain samples
- Reproducing kernel Hilbert spaces associated with analytic translation-invariant Mercer kernels
- Derivative reproducing properties for kernel methods in learning theory
- Constructive analysis for coefficient regularization regression algorithms
- Estimation of the misclassification error for multicategory support vector machine classification
- Optimal rate of the regularized regression learning algorithm
- Learning performance of regularized regression with multiscale kernels based on Markov observations
- Regularized kernel-based reconstruction in generalized Besov spaces
- Parzen windows for multi-class classification
- A closer look at covering number bounds for Gaussian kernels
- Learning and approximation by Gaussians on Riemannian manifolds
- Random sampling in shift invariant spaces
- Consistency of regularized spectral clustering
- Logistic classification with varying gaussians
- The covering number for some Mercer kernel Hilbert spaces
- Learning from non-identical sampling for classification
- Positive definite dot product kernels in learning theory
- Concentration estimates for learning with \(\ell^{1}\)-regularizer and data dependent hypothesis spaces
- Learning errors of linear programming support vector regression
- The consistency of multicategory support vector machines
- Covering numbers of Gaussian reproducing kernel Hilbert spaces
- Mercer theorem for RKHS on noncompact sets
- Computational complexity of the integration problem for anisotropic classes
- Nonparametric nonlinear regression using polynomial and neural approximators: a numerical comparison
- Least-square regularized regression with non-iid sampling
- Approximation with polynomial kernels and SVM classifiers
- Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization
- Concentration estimates for the moving least-square method in learning theory
- Semi-supervised learning based on high density region estimation
- A local Vapnik-Chervonenkis complexity
- Statistical analysis of the moving least-squares method with unbounded sampling
- Distributed regularized least squares with flexible Gaussian kernels
- Oracle inequalities for support vector machines that are based on random entropy numbers
- Error analysis of multicategory support vector machine classifiers
- Applications of Bernstein-Durrmeyer operators in estimating the covering number
- Analysis of Regression Algorithms with Unbounded Sampling
- Fast and strong convergence of online learning algorithms
- Consistent online Gaussian process regression without the sample complexity bottleneck
- Convergence rates of generalization errors for margin-based classification
- Analysis of support vector machines regression
- SVM learning and Lp approximation by Gaussians on Riemannian manifolds
- The learning rates of regularized regression based on reproducing kernel Banach spaces
- Density problem and approximation error in learning theory
- Interpretable machine learning: fundamental principles and 10 grand challenges
- Error bounds of the invariant statistics in machine learning of ergodic Itô diffusions
- Learning rates of least-square regularized regression with polynomial kernels
- Estimates of the norm of the Mercer kernel matrices with discrete orthogonal transforms
- Estimates of learning rates of regularized regression via polyline functions
- Gradient learning in a classification setting by gradient descent
- On the speed of uniform convergence in Mercer's theorem
- Extreme learning machine for ranking: generalization analysis and applications
Cites Work
- Regularization networks and support vector machines
- On the mathematical foundations of learning
- Local error estimates for radial basis function interpolation of scattered data
- Estimating the approximation error in learning theory
- Generalization performance of regularization networks and support vector machines via entropy numbers of compact operators
- Theory of Reproducing Kernels