Optimal learning rates for least squares regularized regression with unbounded sampling
MaRDI QID: Q617656
DOI: 10.1016/j.jco.2010.10.002 · zbMath: 1217.65024 · OpenAlex: W2125875378
Publication date: 21 January 2011
Published in: Journal of Complexity
Full work available at URL: https://doi.org/10.1016/j.jco.2010.10.002
Keywords: learning algorithms · Gaussian noise · covering number · least squares regression · regularization in reproducing kernel Hilbert spaces
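The algorithm whose learning rates this paper studies is regularized least squares (Tikhonov regularization) in a reproducing kernel Hilbert space, with outputs that may be unbounded, e.g. corrupted by Gaussian noise. As a minimal illustrative sketch — not the paper's analysis, and with a Gaussian kernel, bandwidth `sigma`, and regularization parameter `lam` chosen arbitrarily rather than taken from the paper — the following NumPy code computes the estimator via the representer theorem, solving \((K + \lambda m I)\alpha = y\):

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Gram matrix K[i, j] = exp(-||x_i - y_j||^2 / (2 sigma^2))."""
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-np.maximum(sq, 0.0) / (2.0 * sigma**2))

def fit_regularized_least_squares(X, y, lam, sigma=1.0):
    """Minimize (1/m) sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2 over the RKHS.

    By the representer theorem f(x) = sum_i alpha_i K(x, x_i), and the
    coefficient vector alpha solves (K + lam * m * I) alpha = y.
    """
    m = len(y)
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * m * np.eye(m), y)

def predict(X_train, alpha, X_new, sigma=1.0):
    return gaussian_kernel(X_new, X_train, sigma) @ alpha

# Toy example with unbounded (Gaussian) output noise, as in the paper's setting.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(np.pi * X[:, 0]) + rng.normal(0.0, 0.2, size=200)
alpha = fit_regularized_least_squares(X, y, lam=1e-3)
print(predict(X, alpha, np.linspace(-1.0, 1.0, 5)[:, None]))
```

In the theory, \(\lambda\) is chosen as a function of the sample size \(m\), depending on the regularity of the regression function and the capacity of the RKHS, to attain the optimal learning rate; it is fixed here purely for illustration.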
Related Items (38)
- Online regression with unbounded sampling
- Statistical consistency of coefficient-based conditional quantile regression
- Consistent identification of Wiener systems: a machine learning viewpoint
- The kernel regularized learning algorithm for solving Laplace equation with Dirichlet boundary
- Learning interaction kernels in stochastic systems of interacting particles from multiple trajectories
- Regularized least square regression with unbounded and dependent sampling
- Integral operator approach to learning theory with unbounded sampling
- Deterministic error bounds for kernel-based learning techniques under bounded noise
- Error analysis on regularized regression based on the maximum correntropy criterion
- Regularized learning schemes in feature Banach spaces
- Learning rates for the kernel regularized regression with a differentiable strongly convex loss
- On the K-functional in learning theory
- Concentration estimates for learning with unbounded sampling
- Nonasymptotic analysis of robust regression with modified Huber's loss
- Support vector machines regression with unbounded sampling
- Quantile regression with \(\ell_1\)-regularization and Gaussian kernels
- Generalization ability of fractional polynomial models
- Online minimum error entropy algorithm with unbounded sampling
- Constructive analysis for least squares regression with generalized \(K\)-norm regularization
- Convergence rate of SVM for kernel-based robust regression
- Optimal rates for regularization of statistical inverse learning problems
- Constructive analysis for coefficient regularization regression algorithms
- Perturbation of convex risk minimization and its application in differential private learning algorithms
- Optimal convergence rates of high order Parzen windows with unbounded sampling
- Coefficient-based regression with non-identical unbounded sampling
- Coefficient-based \(l^q\)-regularized regression with indefinite kernels and unbounded sampling
- Statistical analysis of the moving least-squares method with unbounded sampling
- System identification using kernel-based regularization: new insights on stability and consistency issues
- Learning with correntropy-induced losses for regression with mixture of symmetric stable noise
- Analysis of regularized least-squares in reproducing kernel Kreĭn spaces
- Analysis of Regression Algorithms with Unbounded Sampling
- Convergence Rates of Spectral Regularization Methods: A Comparison between Ill-Posed Inverse Problems and Statistical Kernel Learning
- Error bounds of the invariant statistics in machine learning of ergodic Itô diffusions
- Indefinite kernel network with dependent sampling
- Half supervised coefficient regularization for regression learning with unbounded sampling
- Optimal learning with Gaussians and correntropy loss
- Bayesian frequentist bounds for machine learning and system identification
- Convergence analysis of coefficient-based regularization under moment incremental condition
Cites Work
- Model selection for regularized least-squares algorithm in learning theory
- Regularization in kernel learning
- Multi-kernel regularized classifiers
- Derivative reproducing properties for kernel methods in learning theory
- Optimal rates for the regularized least-squares algorithm
- Learning rates of least-square regularized regression
- Learning theory estimates via integral operators and their approximations
- Probability Inequalities for the Sum of Independent Random Variables
- SVM learning and \(L_p\) approximation by Gaussians on Riemannian manifolds
- Support Vector Machines
- Capacity of reproducing kernel spaces in learning theory
- A new concentration result for regularized risk minimizers
- Online learning with Markov sampling
- Estimating the approximation error in learning theory
- Leave-One-Out Bounds for Kernel Methods
- Learning Theory