Concentration estimates for learning with unbounded sampling
Publication: 1946480
DOI: 10.1007/s10444-011-9238-8
zbMath: 1283.68289
OpenAlex: W2006650550
MaRDI QID: Q1946480
Publication date: 15 April 2013
Published in: Advances in Computational Mathematics
Full work available at URL: https://doi.org/10.1007/s10444-011-9238-8
Keywords: learning theory; regularization in reproducing kernel Hilbert spaces; least-square regression; concentration estimates; empirical covering number
MSC classification: General nonlinear regression (62J02); Learning and adaptive systems in artificial intelligence (68T05)
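For orientation, a minimal sketch of the regularization scheme these keywords refer to (the standard kernel-based least-squares setup; the paper's precise hypotheses are not reproduced here): given a sample \(z = \{(x_i, y_i)\}_{i=1}^m\) and a Mercer kernel \(K\) with reproducing kernel Hilbert space \(\mathcal{H}_K\), the estimator is

\[
f_{z,\lambda} = \arg\min_{f \in \mathcal{H}_K} \left\{ \frac{1}{m} \sum_{i=1}^{m} \bigl( f(x_i) - y_i \bigr)^2 + \lambda \| f \|_K^2 \right\},
\]

where "unbounded sampling" indicates that the usual uniform bound \(|y| \le M\) on the output is replaced by a weaker moment-type condition on \(y\).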
Related Items (25)
- Statistical consistency of coefficient-based conditional quantile regression
- Regularized least square regression with unbounded and dependent sampling
- Deterministic error bounds for kernel-based learning techniques under bounded noise
- Learning with coefficient-based regularization and \(\ell^1\)-penalty
- Distributed learning with multi-penalty regularization
- On the convergence rate of kernel-based sequential greedy regression
- Learning rates for regularized least squares ranking algorithm
- Coefficient-based regularized distribution regression
- Learning with Convex Loss and Indefinite Kernels
- Support vector machines regression with unbounded sampling
- Quantile regression with \(\ell_1\)-regularization and Gaussian kernels
- Unified approach to coefficient-based regularized regression
- Constructive analysis for least squares regression with generalized \(K\)-norm regularization
- Constructive analysis for coefficient regularization regression algorithms
- Optimal convergence rates of high order Parzen windows with unbounded sampling
- Coefficient-based \(l^q\)-regularized regression with indefinite kernels and unbounded sampling
- Statistical analysis of the moving least-squares method with unbounded sampling
- Regularized modal regression with data-dependent hypothesis spaces
- System identification using kernel-based regularization: new insights on stability and consistency issues
- Learning with correntropy-induced losses for regression with mixture of symmetric stable noise
- Analysis of Regression Algorithms with Unbounded Sampling
- Optimal rates for coefficient-based regularized regression
- Unnamed Item
- Bayesian frequentist bounds for machine learning and system identification
- Thresholded spectral algorithms for sparse approximations
Cites Work
- Unnamed Item
- Optimal learning rates for least squares regularized regression with unbounded sampling
- Model selection for regularized least-squares algorithm in learning theory
- Regularization in kernel learning
- Multi-kernel regularized classifiers
- Derivative reproducing properties for kernel methods in learning theory
- A note on application of integral operator in learning theory
- Elastic-net regularization in learning theory
- A note on different covering numbers in learning theory
- Best choices for regularization parameters in learning theory: on the bias-variance problem
- Compactly supported positive definite radial functions
- Weak convergence and empirical processes. With applications to statistics
- Least-square regularized regression with non-iid sampling
- Optimal rates for the regularized least-squares algorithm
- Learning with sample dependent hypothesis spaces
- Learning rates of regularized regression for exponentially strongly mixing sequence
- Approximation in learning theory
- Learning rates of least-square regularized regression
- Complexities of convex combinations and bounding the generalization error in classification
- Local Rademacher complexities
- Learning theory estimates via integral operators and their approximations
- On the mathematical foundations of learning
- Probability Inequalities for the Sum of Independent Random Variables
- Capacity of reproducing kernel spaces in learning theory
- Online Learning with Markov Sampling
- Leave-One-Out Bounds for Kernel Methods
- Neural Network Learning
- Learning Theory
- Theory of Reproducing Kernels