Consistency analysis of an empirical minimum error entropy algorithm
Publication: 285539
DOI: 10.1016/j.acha.2014.12.005
zbMath: 1382.94034
arXiv: 1412.5272
OpenAlex: W2963604113
MaRDI QID: Q285539
Authors: Ting Hu, Qiang Wu, Jun Fan, Ding-Xuan Zhou
Publication date: 19 May 2016
Published in: Applied and Computational Harmonic Analysis
Full work available at URL: https://arxiv.org/abs/1412.5272
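
For orientation only (a hedged sketch drawn from the general minimum error entropy literature, e.g. the Information Theoretic Learning framework cited below, not from the text of this record): the empirical MEE algorithm is commonly formulated as minimizing a Parzen-window estimate of Rényi's quadratic entropy of the error \(e = y - f(x)\) over a sample \(\mathbf{z} = \{(x_i, y_i)\}_{i=1}^m\),

\[
\widehat{H}_{\mathbf{z}}(f) = -\log\left(\frac{1}{m^2 h}\sum_{i=1}^{m}\sum_{j=1}^{m} G\!\left(\frac{\bigl[(y_i - f(x_i)) - (y_j - f(x_j))\bigr]^2}{2h^2}\right)\right),
\]

where \(G\) is a windowing function (the Gaussian choice being \(G(t) = e^{-t}\)) and \(h > 0\) is a scaling parameter. A consistency analysis asks whether minimizers of \(\widehat{H}_{\mathbf{z}}\) recover the regression function as the sample size \(m\) grows; the precise normalization and assumptions are those of the paper itself.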
Related Items
- Online regularized learning with pairwise loss functions
- Learning theory of minimum error entropy under weak moment conditions
- A Statistical Learning Approach to Modal Regression
- Online regularized pairwise learning with non-i.i.d. observations
- On the robustness of regularized pairwise learning methods based on kernels
- Toward recursive spherical harmonics issued bi-filters. II: An associated spherical harmonics entropy for optimal modeling
- Deep distributed convolutional neural networks: Universality
- Fast rates of minimum error entropy with heavy-tailed noise
- Approximation on variable exponent spaces by linear integral operators
- Distributed kernel gradient descent algorithm for minimum error entropy principle
- Kernel-based sparse regression with the correntropy-induced loss
- Error analysis on regularized regression based on the maximum correntropy criterion
- Theory of deep convolutional neural networks: downsampling
- Theory of deep convolutional neural networks. III: Approximating radial functions
- Error analysis of kernel regularized pairwise learning with a strongly convex loss
- Supersmooth density estimations over \(L^p\) risk by wavelets
- On meshfree numerical differentiation
- Online minimum error entropy algorithm with unbounded sampling
- Kernel gradient descent algorithm for information theoretic learning
- Unregularized online learning algorithms with general loss functions
- Chebyshev type inequality for stochastic Bernstein polynomials
- Robust kernel-based distribution regression
- Learning Theory of Randomized Sparse Kaczmarz Method
- Online pairwise learning algorithms with convex loss functions
- Universality of deep convolutional neural networks
- Learning with correntropy-induced losses for regression with mixture of symmetric stable noise
- Online regularized pairwise learning with least squares loss
- Theory of deep convolutional neural networks. II: Spherical analysis
- On extension theorems and their connection to universal consistency in machine learning
- Error bounds for learning the kernel
- Learning under \((1 + \epsilon)\)-moment conditions
- New Insights Into Learning With Correntropy-Based Regression
- A Framework of Learning Through Empirical Gain Maximization
- Optimal learning with Gaussians and correntropy loss
Cites Work
- Tails of Lévy measure of geometric stable random variables
- Blind source separation using Rényi's \(\alpha\)-marginal entropies
- Local Rademacher complexities
- The MEE Principle in Data Classification: A Perceptron-Based Analysis
- On a Class of Unimodal Distributions
- DOI: 10.1162/153244303321897690
- Information Theoretic Learning