Consistency analysis of an empirical minimum error entropy algorithm
From MaRDI portal
Abstract: In this paper we study the consistency of an empirical minimum error entropy (MEE) algorithm in a regression setting. We introduce two types of consistency. Error entropy consistency, which requires the error entropy of the learned function to approximate the minimum error entropy, is shown to always hold if the bandwidth parameter tends to 0 at an appropriate rate. Regression consistency, which requires the learned function to approximate the regression function, is a more delicate issue. We prove that error entropy consistency implies regression consistency for homoskedastic models in which the noise is independent of the input variable. For heteroskedastic models, however, a counterexample shows that the two types of consistency do not coincide. A surprising result is that regression consistency always holds, provided that the bandwidth parameter tends to infinity at an appropriate rate. Regression consistency of two classes of special models is shown to hold with a fixed bandwidth parameter, which further illustrates the complexity of regression consistency of MEE. The Fourier transform plays a crucial role in our analysis.
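The empirical MEE criterion discussed in the abstract can be sketched as follows: given the residuals of a candidate function, estimate their quadratic Rényi entropy with a Parzen (Gaussian-kernel) window of bandwidth h, and pick the hypothesis whose residual entropy is smallest. The sketch below is illustrative only; the Gaussian kernel, the linear hypothesis class, the grid search, and the mean-recentering of the intercept are assumptions made here for a runnable example, not the paper's construction.

```python
import numpy as np

def empirical_error_entropy(errors, h):
    """Parzen estimate of the quadratic Renyi entropy of the residuals.

    H_2 = -log V with the information potential
    V = (1/n^2) * sum_{i,j} G_s(e_i - e_j), s = sqrt(2)*h,
    where G_s is the Gaussian density with scale s (the sqrt(2) factor
    comes from convolving two Gaussian kernels of bandwidth h).
    """
    diffs = errors[:, None] - errors[None, :]
    s = np.sqrt(2.0) * h
    V = np.mean(np.exp(-diffs ** 2 / (2.0 * s ** 2)) / (s * np.sqrt(2.0 * np.pi)))
    return -np.log(V)

def mee_fit_slope(x, y, h, grid=np.linspace(-4.0, 4.0, 401)):
    """Fit y ~ a*x + b by minimizing the empirical error entropy of the
    residuals over a grid of slopes (a toy stand-in for the paper's
    empirical minimizer). The entropy is shift-invariant, so the
    intercept is recovered afterwards by centering the residuals.
    """
    entropies = [empirical_error_entropy(y - a * x, h) for a in grid]
    a = float(grid[int(np.argmin(entropies))])
    b = float(np.mean(y - a * x))
    return a, b
```

For a homoskedastic model such as y = 2x + noise, the residual entropy is minimized near the true slope, which is the regime where the abstract's error entropy consistency implies regression consistency.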
Recommendations
- Regularization schemes for minimum error entropy principle
- Learning theory approach to minimum error entropy criterion
- Fast rates of minimum error entropy with heavy-tailed noise
- On optimal estimations with minimum error entropy criterion
- Online minimum error entropy algorithm with unbounded sampling
Cites work
- scientific article; zbMATH DE number 18437
- scientific article; zbMATH DE number 1332320
- 10.1162/153244303321897690
- Blind source separation using Renyi's \(\alpha\)-marginal entropies.
- Information theoretic learning. Renyi's entropy and kernel perspectives
- Learning theory approach to minimum error entropy criterion
- Local Rademacher complexities
- On a Class of Unimodal Distributions
- Tails of Lévy measure of geometric stable random variables
- The MEE principle in data classification: a perceptron-based analysis
Cited in (41)
- Kernel-based sparse regression with the correntropy-induced loss
- Learning with correntropy-induced losses for regression with mixture of symmetric stable noise
- Universality of deep convolutional neural networks
- Online pairwise learning algorithms with convex loss functions
- Online regularized learning with pairwise loss functions
- Learning theory of randomized sparse Kaczmarz method
- On the convergence of gradient descent for robust functional linear regression
- Kernel gradient descent algorithm for information theoretic learning
- On meshfree numerical differentiation
- Supersmooth density estimations over \(L^p\) risk by wavelets
- Online minimum error entropy algorithm with unbounded sampling
- Robust kernel-based distribution regression
- Distributed learning with regularized least squares
- Error analysis on regularized regression based on the maximum correntropy criterion
- On extension theorems and their connection to universal consistency in machine learning
- Distributed minimum error entropy algorithms
- Theory of deep convolutional neural networks. III: Approximating radial functions
- On the robustness of regularized pairwise learning methods based on kernels
- Distributed kernel gradient descent algorithm for minimum error entropy principle
- Toward recursive spherical harmonics issued bi-filters. II: An associated spherical harmonics entropy for optimal modeling
- A Statistical Learning Approach to Modal Regression
- Learning theory approach to minimum error entropy criterion
- A minimum-error entropy criterion with self-adjusting step-size (MEE-SAS)
- Online regularized pairwise learning with least squares loss
- Fast rates of minimum error entropy with heavy-tailed noise
- Theory of deep convolutional neural networks: downsampling
- Error analysis of kernel regularized pairwise learning with a strongly convex loss
- Learning under \((1 + \epsilon)\)-moment conditions
- Minimum cross-entropy analysis with entropy-type constraints
- Theory of deep convolutional neural networks. II: Spherical analysis
- New insights into learning with correntropy-based regression
- Unregularized online learning algorithms with general loss functions
- Approximation on variable exponent spaces by linear integral operators
- Online regularized pairwise learning with non-i.i.d. observations
- Chebyshev type inequality for stochastic Bernstein polynomials
- A Framework of Learning Through Empirical Gain Maximization
- Maximum correntropy criterion regression models with tending-to-zero scale parameters
- Learning theory of minimum error entropy under weak moment conditions
- Deep distributed convolutional neural networks: universality
- Optimal learning with Gaussians and correntropy loss
- Error bounds for learning the kernel
MaRDI item Q285539