Consistency analysis of an empirical minimum error entropy algorithm


DOI: 10.1016/j.acha.2014.12.005
zbMath: 1382.94034
arXiv: 1412.5272
OpenAlex: W2963604113
MaRDI QID: Q285539

Ting Hu, Qiang Wu, Jun Fan, Ding-Xuan Zhou

Publication date: 19 May 2016

Published in: Applied and Computational Harmonic Analysis

Full work available at URL: https://arxiv.org/abs/1412.5272
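For orientation, the quantity minimized by an empirical minimum error entropy (MEE) algorithm is the Parzen-window estimate of the quadratic Rényi entropy of the prediction errors. The sketch below illustrates that objective only; the function name, the choice of a Gaussian window, and the width parameter h are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def empirical_mee_objective(errors, h=1.0):
    """Parzen-window estimate of the quadratic Renyi entropy of the errors.

    A minimal sketch: given residuals e_i = y_i - f(x_i) of a candidate
    hypothesis f, an empirical MEE learner would pick f to minimize this
    value. The Gaussian window and width h are assumptions for illustration.
    """
    e = np.asarray(errors, dtype=float)
    diff = e[:, None] - e[None, :]            # all pairwise error differences
    g = np.exp(-diff**2 / (2.0 * h**2))       # Gaussian window G_h(e_i - e_j)
    information_potential = g.mean()          # (1/m^2) * sum_{i,j} G_h(e_i - e_j)
    return -np.log(information_potential)     # empirical entropy estimate

# Usage: smaller values indicate more concentrated (lower-entropy) errors.
print(empirical_mee_objective([0.1, -0.2, 0.05], h=0.5))
```

Consistency, in this context, concerns whether minimizing such an empirical objective over a hypothesis space recovers the target function as the sample size grows.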




Related Items

Online regularized learning with pairwise loss functions
Learning theory of minimum error entropy under weak moment conditions
A Statistical Learning Approach to Modal Regression
Online regularized pairwise learning with non-i.i.d. observations
On the robustness of regularized pairwise learning methods based on kernels
Toward recursive spherical harmonics issued bi-filters. II: An associated spherical harmonics entropy for optimal modeling
Deep distributed convolutional neural networks: Universality
Fast rates of minimum error entropy with heavy-tailed noise
Approximation on variable exponent spaces by linear integral operators
Distributed kernel gradient descent algorithm for minimum error entropy principle
Kernel-based sparse regression with the correntropy-induced loss
Error analysis on regularized regression based on the maximum correntropy criterion
Theory of deep convolutional neural networks: downsampling
Theory of deep convolutional neural networks. III: Approximating radial functions
Error analysis of kernel regularized pairwise learning with a strongly convex loss
Supersmooth density estimations over \(L^p\) risk by wavelets
On meshfree numerical differentiation
Online minimum error entropy algorithm with unbounded sampling
Kernel gradient descent algorithm for information theoretic learning
Unregularized online learning algorithms with general loss functions
Chebyshev type inequality for stochastic Bernstein polynomials
Robust kernel-based distribution regression
Learning Theory of Randomized Sparse Kaczmarz Method
Online pairwise learning algorithms with convex loss functions
Universality of deep convolutional neural networks
Learning with correntropy-induced losses for regression with mixture of symmetric stable noise
Online regularized pairwise learning with least squares loss
Theory of deep convolutional neural networks. II: Spherical analysis
On extension theorems and their connection to universal consistency in machine learning
Error bounds for learning the kernel
Learning under \((1 + \epsilon)\)-moment conditions
New Insights Into Learning With Correntropy-Based Regression
A Framework of Learning Through Empirical Gain Maximization
Optimal learning with Gaussians and correntropy loss



Cites Work