Concentration estimates for learning with unbounded sampling

From MaRDI portal

Publication: 1946480

DOI: 10.1007/s10444-011-9238-8
zbMath: 1283.68289
OpenAlex: W2006650550
MaRDI QID: Q1946480

Ding-Xuan Zhou, Zheng-Chu Guo

Publication date: 15 April 2013

Published in: Advances in Computational Mathematics

Full work available at URL: https://doi.org/10.1007/s10444-011-9238-8




Related Items

Statistical consistency of coefficient-based conditional quantile regression
Regularized least square regression with unbounded and dependent sampling
Deterministic error bounds for kernel-based learning techniques under bounded noise
Learning with coefficient-based regularization and \(\ell^1\)-penalty
Distributed learning with multi-penalty regularization
On the convergence rate of kernel-based sequential greedy regression
Learning rates for regularized least squares ranking algorithm
Coefficient-based regularized distribution regression
Learning with Convex Loss and Indefinite Kernels
Support vector machines regression with unbounded sampling
Quantile regression with \(\ell_1\)-regularization and Gaussian kernels
Unified approach to coefficient-based regularized regression
Constructive analysis for least squares regression with generalized \(K\)-norm regularization
Constructive analysis for coefficient regularization regression algorithms
Optimal convergence rates of high order Parzen windows with unbounded sampling
Coefficient-based \(l^q\)-regularized regression with indefinite kernels and unbounded sampling
Statistical analysis of the moving least-squares method with unbounded sampling
Regularized modal regression with data-dependent hypothesis spaces
System identification using kernel-based regularization: new insights on stability and consistency issues
Learning with correntropy-induced losses for regression with mixture of symmetric stable noise
Analysis of Regression Algorithms with Unbounded Sampling
Optimal rates for coefficient-based regularized regression
Unnamed Item
Bayesian frequentist bounds for machine learning and system identification
Thresholded spectral algorithms for sparse approximations



Cites Work


This page was built for publication: Concentration estimates for learning with unbounded sampling