Concentration estimates for learning with unbounded sampling
Publication: 1946480
DOI: 10.1007/s10444-011-9238-8
zbMath: 1283.68289
MaRDI QID: Q1946480
Publication date: 15 April 2013
Published in: Advances in Computational Mathematics
Full work available at URL: https://doi.org/10.1007/s10444-011-9238-8
Keywords: learning theory; regularization in reproducing kernel Hilbert spaces; least-square regression; concentration estimates; empirical covering number
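For orientation, the keywords refer to the least-squares regularization scheme in a reproducing kernel Hilbert space \(\mathcal{H}_K\). A standard formulation of this algorithm is sketched below as background; the paper's specific unbounded-sampling assumptions on the output variable are not reproduced here.
\[
f_{\mathbf{z},\lambda} = \arg\min_{f \in \mathcal{H}_K} \left\{ \frac{1}{m} \sum_{i=1}^{m} \big(f(x_i) - y_i\big)^2 + \lambda \|f\|_K^2 \right\},
\]
where \(\mathbf{z} = \{(x_i, y_i)\}_{i=1}^{m}\) is the sample and \(\lambda > 0\) is the regularization parameter.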
Related Items
- Regularized modal regression with data-dependent hypothesis spaces
- Thresholded spectral algorithms for sparse approximations
- Learning rates for regularized least squares ranking algorithm
- Learning with Convex Loss and Indefinite Kernels
- Support vector machines regression with unbounded sampling
- Statistical consistency of coefficient-based conditional quantile regression
- Regularized least square regression with unbounded and dependent sampling
- Learning with coefficient-based regularization and \(\ell^1\)-penalty
- Quantile regression with \(\ell_1\)-regularization and Gaussian kernels
- Constructive analysis for coefficient regularization regression algorithms
- Unified approach to coefficient-based regularized regression
- Statistical analysis of the moving least-squares method with unbounded sampling
- Constructive analysis for least squares regression with generalized \(K\)-norm regularization
- Coefficient-based \(l^q\)-regularized regression with indefinite kernels and unbounded sampling
- System identification using kernel-based regularization: new insights on stability and consistency issues
- On the convergence rate of kernel-based sequential greedy regression
- Optimal convergence rates of high order Parzen windows with unbounded sampling
- Learning with correntropy-induced losses for regression with mixture of symmetric stable noise
- Optimal rates for coefficient-based regularized regression
- Distributed learning with multi-penalty regularization
Cites Work
- Optimal learning rates for least squares regularized regression with unbounded sampling
- Model selection for regularized least-squares algorithm in learning theory
- Regularization in kernel learning
- Multi-kernel regularized classifiers
- Derivative reproducing properties for kernel methods in learning theory
- A note on application of integral operator in learning theory
- Elastic-net regularization in learning theory
- A note on different covering numbers in learning theory
- Best choices for regularization parameters in learning theory: on the bias-variance problem
- Compactly supported positive definite radial functions
- Weak convergence and empirical processes. With applications to statistics
- Least-square regularized regression with non-iid sampling
- Optimal rates for the regularized least-squares algorithm
- Learning with sample dependent hypothesis spaces
- Learning rates of regularized regression for exponentially strongly mixing sequence
- Approximation in learning theory
- Learning rates of least-square regularized regression
- Complexities of convex combinations and bounding the generalization error in classification
- Local Rademacher complexities
- Learning theory estimates via integral operators and their approximations
- On the mathematical foundations of learning
- Probability Inequalities for the Sum of Independent Random Variables
- Capacity of reproducing kernel spaces in learning theory
- Online learning with Markov sampling
- Leave-One-Out Bounds for Kernel Methods
- Neural Network Learning
- Learning Theory
- Theory of Reproducing Kernels