Leave-One-Out Bounds for Kernel Methods
Cites work
- DOI: 10.1162/153244302760200704
- An Efron-Stein inequality for nonsymmetric statistics
- Approximation properties of zonal function networks using scattered data on the sphere
- Error estimates for interpolation by compactly supported radial basis functions of minimal degree
- Information-theoretic determination of minimax rates of convergence
- On the dual formulation of regularized linear systems with convex risks
- On the mathematical foundations of learning
- Rates of convergence for minimum contrast estimators
- Regularization networks and support vector machines
- Stability results for scattered-data interpolation on Euclidean spheres
- The jackknife estimate of variance
Cited in (37 documents)
- Indefinite kernel network with \(l^q\)-norm regularization
- Kernel gradient descent algorithm for information theoretic learning
- Untitled scientific article (zbMATH DE number 7415114)
- ERM learning with unbounded sampling
- Least-square regularized regression with non-iid sampling
- Hermite learning with gradient data
- Consistency and generalization bounds for maximum entropy density estimation
- Reproducing kernel Banach spaces with the \(\ell^{1}\) norm. II: Error analysis for regularized least square regression
- Consistency analysis of spectral regularization algorithms
- Least-squares regularized regression with dependent samples and \(q\)-penalty
- Learning with coefficient-based regularization and \(\ell^{1}\)-penalty
- Thresholded spectral algorithms for sparse approximations
- Untitled scientific article (zbMATH DE number 1928677)
- Concentration estimates for learning with \(\ell ^{1}\)-regularizer and data dependent hypothesis spaces
- Least square regression with indefinite kernels and coefficient regularization
- Optimal learning rates for least squares regularized regression with unbounded sampling
- Half supervised coefficient regularization for regression learning with unbounded sampling
- Indefinite kernel network with dependent sampling
- ERM learning algorithm for multi-class classification
- Leave one support vector out cross validation for fast estimation of generalization errors
- Coefficient regularized regression with non-iid sampling
- Sparse kernel regression with coefficient-based \(\ell_q\)-regularization
- Concentration estimates for learning with unbounded sampling
- Untitled scientific article (zbMATH DE number 1843054)
- Pairwise learning problems with regularization networks and Nyström subsampling approach
- Learning with sample dependent hypothesis spaces
- Shannon sampling. II: Connections to learning theory
- Sharp learning rates of coefficient-based \(l^q\)-regularized regression with indefinite kernels
- Integral operator approach to learning theory with unbounded sampling
- Untitled scientific article (zbMATH DE number 1928700)
- Untitled scientific article (zbMATH DE number 1804116)
- Regularized least square regression with dependent samples
- Learning theory of distributed regression with bias corrected regularization kernel network
- Robust pairwise learning with Huber loss
- A note on application of integral operator in learning theory
- Optimality of regularized least squares ranking with imperfect kernels
- Learning Bounds for Kernel Regression Using Effective Data Dimensionality