Consistency analysis of an empirical minimum error entropy algorithm
Publication: 285539
DOI: 10.1016/j.acha.2014.12.005
zbMATH Open: 1382.94034
arXiv: 1412.5272
OpenAlex: W2963604113
MaRDI QID: Q285539
Authors: Jun Fan, Ting Hu, Qiang Wu, Ding-Xuan Zhou
Publication date: 19 May 2016
Published in: Applied and Computational Harmonic Analysis
Abstract: In this paper we study the consistency of an empirical minimum error entropy (MEE) algorithm in a regression setting. We introduce two types of consistency. The error entropy consistency, which requires the error entropy of the learned function to approximate the minimum error entropy, is shown to always hold if the bandwidth parameter tends to 0 at an appropriate rate. The regression consistency, which requires the learned function to approximate the regression function, is, however, a more delicate issue. We prove that error entropy consistency implies regression consistency for homoskedastic models in which the noise is independent of the input variable. For heteroskedastic models, a counterexample shows that the two types of consistency do not coincide. A surprising result is that regression consistency always holds, provided that the bandwidth parameter tends to infinity at an appropriate rate. Regression consistency for two classes of special models is shown to hold with a fixed bandwidth parameter, which further illustrates the complexity of the regression consistency of MEE. The Fourier transform plays a crucial role in our analysis.
Full work available at URL: https://arxiv.org/abs/1412.5272
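The abstract describes the empirical MEE criterion only at a high level. The following is a minimal sketch, not from the paper, of how such a criterion can be evaluated and minimized: residuals e_i = y_i - f(x_i) are formed, their Renyi quadratic entropy is estimated with a Gaussian Parzen window of bandwidth h applied to pairwise residual differences, and this empirical error entropy is minimized over a hypothesis class. The synthetic homoskedastic data, one-parameter linear hypothesis class, grid search, and bandwidth value below are illustrative assumptions.

```python
# Sketch of the empirical minimum error entropy (MEE) criterion (illustrative only).
# For residuals e_i = y_i - f(x_i), the empirical Renyi quadratic error entropy with
# Parzen bandwidth h is
#     H_h(f) = -log( (1/m^2) * sum_{i,j} G_h(e_i - e_j) ),
# where G_h is a Gaussian window; the MEE algorithm minimizes H_h over the hypothesis class.
import numpy as np

def empirical_error_entropy(residuals, h):
    """Empirical Renyi quadratic entropy of the residuals for bandwidth h."""
    diffs = residuals[:, None] - residuals[None, :]                 # pairwise e_i - e_j
    parzen = np.exp(-diffs ** 2 / (2.0 * h ** 2)) / (np.sqrt(2.0 * np.pi) * h)
    information_potential = parzen.mean()                           # (1/m^2) * sum_{i,j} G_h
    return -np.log(information_potential)

# Synthetic homoskedastic data: y = 2x + noise, with noise independent of x,
# the setting in which error entropy consistency implies regression consistency.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=200)
y = 2.0 * x + rng.normal(scale=0.1, size=200)

# Hypothetical hypothesis class: f_a(x) = a * x, with the slope a found by grid search.
slopes = np.linspace(0.0, 4.0, 81)
h = 0.5                                                             # bandwidth parameter
entropies = [empirical_error_entropy(y - a * x, h) for a in slopes]
best_slope = slopes[int(np.argmin(entropies))]
print(f"MEE estimate of the slope: {best_slope:.2f}")               # close to the true value 2.0
```

Note that the criterion depends only on pairwise differences of residuals, so it is invariant under adding a constant to f; in practice an intercept is recovered afterwards, for example by centering the residuals.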
Recommendations
- Regularization schemes for minimum error entropy principle
- Learning theory approach to minimum error entropy criterion
- Fast rates of minimum error entropy with heavy-tailed noise
- On optimal estimations with minimum error entropy criterion
- Online minimum error entropy algorithm with unbounded sampling
Cites Work
- Title not available
- Local Rademacher complexities
- DOI: 10.1162/153244303321897690
- Information theoretic learning. Renyi's entropy and kernel perspectives
- Title not available
- Tails of Lévy measure of geometric stable random variables
- Blind source separation using Renyi's \(\alpha\)-marginal entropies
- The MEE principle in data classification: a perceptron-based analysis
- On a Class of Unimodal Distributions
- Learning theory approach to minimum error entropy criterion
Cited In (41)
- Kernel-based sparse regression with the correntropy-induced loss
- Learning with correntropy-induced losses for regression with mixture of symmetric stable noise
- Universality of deep convolutional neural networks
- Online pairwise learning algorithms with convex loss functions
- Online regularized learning with pairwise loss functions
- Learning theory of randomized sparse Kaczmarz method
- On the convergence of gradient descent for robust functional linear regression
- Kernel gradient descent algorithm for information theoretic learning
- On meshfree numerical differentiation
- Supersmooth density estimations over \(L^p\) risk by wavelets
- Online minimum error entropy algorithm with unbounded sampling
- Robust kernel-based distribution regression
- Distributed learning with regularized least squares
- Error analysis on regularized regression based on the maximum correntropy criterion
- On extension theorems and their connection to universal consistency in machine learning
- Theory of deep convolutional neural networks. III: Approximating radial functions
- Distributed minimum error entropy algorithms
- Distributed kernel gradient descent algorithm for minimum error entropy principle
- Toward recursive spherical harmonics issued bi-filters. II: An associated spherical harmonics entropy for optimal modeling
- A Statistical Learning Approach to Modal Regression
- On the robustness of regularized pairwise learning methods based on kernels
- Learning theory approach to minimum error entropy criterion
- Online regularized pairwise learning with least squares loss
- A minimum-error entropy criterion with self-adjusting step-size (MEE-SAS)
- Fast rates of minimum error entropy with heavy-tailed noise
- Theory of deep convolutional neural networks: downsampling
- Error analysis of kernel regularized pairwise learning with a strongly convex loss
- Theory of deep convolutional neural networks. II: Spherical analysis
- Learning under \((1 + \epsilon)\)-moment conditions
- Minimum cross-entropy analysis with entropy-type constraints
- New insights into learning with correntropy-based regression
- Online regularized pairwise learning with non-i.i.d. observations
- Unregularized online learning algorithms with general loss functions
- Approximation on variable exponent spaces by linear integral operators
- Chebyshev type inequality for stochastic Bernstein polynomials
- A Framework of Learning Through Empirical Gain Maximization
- Maximum correntropy criterion regression models with tending-to-zero scale parameters
- Optimal learning with Gaussians and correntropy loss
- Learning theory of minimum error entropy under weak moment conditions
- Deep distributed convolutional neural networks: universality
- Error bounds for learning the kernel