Optimal learning rates for least squares regularized regression with unbounded sampling



DOI: 10.1016/j.jco.2010.10.002
zbMath: 1217.65024
MaRDI QID: Q617656

Ding-Xuan Zhou, Cheng Wang

Publication date: 21 January 2011

Published in: Journal of Complexity

Full work available at URL: https://doi.org/10.1016/j.jco.2010.10.002


62J05: Linear regression; mixed models

68T05: Learning and adaptive systems in artificial intelligence

46E22: Hilbert spaces with reproducing kernels (= (proper) functional Hilbert spaces, including de Branges-Rovnyak and other structured spaces)
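The work classified above concerns regularized least squares regression over a reproducing kernel Hilbert space. As a minimal illustrative sketch of that scheme (kernel ridge regression), with a Gaussian kernel, synthetic data, and parameter values chosen purely for demonstration and not taken from the paper:

```python
# Illustrative kernel regularized least squares (kernel ridge regression).
# Kernel, sample size, noise level, and regularization parameter are
# hypothetical choices for demonstration only.
import numpy as np

def gaussian_kernel(X1, X2, sigma=0.5):
    # K(x, t) = exp(-|x - t|^2 / (2 sigma^2)) for 1-D inputs
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def krls_fit(X, y, lam=1e-3, sigma=0.5):
    # Solve (K + lam * m * I) c = y so that f(x) = sum_i c_i K(x, x_i)
    m = len(X)
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * m * np.eye(m), y)

def krls_predict(X_train, c, X_new, sigma=0.5):
    return gaussian_kernel(X_new, X_train, sigma) @ c

# Synthetic regression data with Gaussian (hence unbounded) noise.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 50)
y = np.sin(2 * np.pi * X) + 0.1 * rng.standard_normal(50)
c = krls_fit(X, y)
print(krls_predict(X, c, np.array([0.25, 0.75])))
```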


Related Items

Regularized learning schemes in feature Banach spaces
Convergence rate of SVM for kernel-based robust regression
The kernel regularized learning algorithm for solving Laplace equation with Dirichlet boundary
On the K-functional in learning theory
Support vector machines regression with unbounded sampling
Online minimum error entropy algorithm with unbounded sampling
Optimal learning with Gaussians and correntropy loss
Statistical consistency of coefficient-based conditional quantile regression
Regularized least square regression with unbounded and dependent sampling
Integral operator approach to learning theory with unbounded sampling
Quantile regression with \(\ell_1\)-regularization and Gaussian kernels
Generalization ability of fractional polynomial models
Constructive analysis for coefficient regularization regression algorithms
Perturbation of convex risk minimization and its application in differential private learning algorithms
Optimal rates for regularization of statistical inverse learning problems
Statistical analysis of the moving least-squares method with unbounded sampling
Constructive analysis for least squares regression with generalized \(K\)-norm regularization
Coefficient-based \(l^q\)-regularized regression with indefinite kernels and unbounded sampling
System identification using kernel-based regularization: new insights on stability and consistency issues
Concentration estimates for learning with unbounded sampling
Coefficient-based regression with non-identical unbounded sampling
Analysis of regularized least-squares in reproducing kernel Kreĭn spaces
Error bounds of the invariant statistics in machine learning of ergodic Itô diffusions
Bayesian frequentist bounds for machine learning and system identification
Learning interaction kernels in stochastic systems of interacting particles from multiple trajectories
Learning rates for the kernel regularized regression with a differentiable strongly convex loss
Optimal convergence rates of high order Parzen windows with unbounded sampling
Learning with correntropy-induced losses for regression with mixture of symmetric stable noise
Consistent identification of Wiener systems: a machine learning viewpoint
Deterministic error bounds for kernel-based learning techniques under bounded noise
Error analysis on regularized regression based on the maximum correntropy criterion
Nonasymptotic analysis of robust regression with modified Huber's loss
INDEFINITE KERNEL NETWORK WITH DEPENDENT SAMPLING
Half supervised coefficient regularization for regression learning with unbounded sampling
CONVERGENCE ANALYSIS OF COEFFICIENT-BASED REGULARIZATION UNDER MOMENT INCREMENTAL CONDITION
Online regression with unbounded sampling
Analysis of Regression Algorithms with Unbounded Sampling
Convergence Rates of Spectral Regularization Methods: A Comparison between Ill-Posed Inverse Problems and Statistical Kernel Learning



Cites Work