Optimal learning rates for least squares regularized regression with unbounded sampling

From MaRDI portal

Publication: 617656

DOI: 10.1016/J.JCO.2010.10.002
zbMath: 1217.65024
OpenAlex: W2125875378
MaRDI QID: Q617656

Authors: Ding-Xuan Zhou, Cheng Wang

Publication date: 21 January 2011

Published in: Journal of Complexity

Full work available at URL: https://doi.org/10.1016/j.jco.2010.10.002




Related Items (39)

Online regression with unbounded sampling
Statistical consistency of coefficient-based conditional quantile regression
Consistent identification of Wiener systems: a machine learning viewpoint
The kernel regularized learning algorithm for solving Laplace equation with Dirichlet boundary
Learning interaction kernels in stochastic systems of interacting particles from multiple trajectories
Regularized least square regression with unbounded and dependent sampling
Integral operator approach to learning theory with unbounded sampling
Deterministic error bounds for kernel-based learning techniques under bounded noise
Error analysis on regularized regression based on the maximum correntropy criterion
Regularized learning schemes in feature Banach spaces
Learning rates for the kernel regularized regression with a differentiable strongly convex loss
On the K-functional in learning theory
Concentration estimates for learning with unbounded sampling
Nonasymptotic analysis of robust regression with modified Huber's loss
Support vector machines regression with unbounded sampling
Quantile regression with \(\ell_1\)-regularization and Gaussian kernels
Generalization ability of fractional polynomial models
Online minimum error entropy algorithm with unbounded sampling
Constructive analysis for least squares regression with generalized \(K\)-norm regularization
Convergence rate of SVM for kernel-based robust regression
Optimal rates for regularization of statistical inverse learning problems
Constructive analysis for coefficient regularization regression algorithms
Perturbation of convex risk minimization and its application in differential private learning algorithms
Optimal convergence rates of high order Parzen windows with unbounded sampling
Coefficient-based regression with non-identical unbounded sampling
Coefficient-based \(l^q\)-regularized regression with indefinite kernels and unbounded sampling
Statistical analysis of the moving least-squares method with unbounded sampling
System identification using kernel-based regularization: new insights on stability and consistency issues
Learning with correntropy-induced losses for regression with mixture of symmetric stable noise
Analysis of regularized least-squares in reproducing kernel Kreĭn spaces
Analysis of Regression Algorithms with Unbounded Sampling
Convergence Rates of Spectral Regularization Methods: A Comparison between Ill-Posed Inverse Problems and Statistical Kernel Learning
Error bounds of the invariant statistics in machine learning of ergodic Itô diffusions
INDEFINITE KERNEL NETWORK WITH DEPENDENT SAMPLING
Half supervised coefficient regularization for regression learning with unbounded sampling
Optimal learning with Gaussians and correntropy loss
Unnamed Item
Bayesian frequentist bounds for machine learning and system identification
CONVERGENCE ANALYSIS OF COEFFICIENT-BASED REGULARIZATION UNDER MOMENT INCREMENTAL CONDITION




This page was built for publication: Optimal learning rates for least squares regularized regression with unbounded sampling