Learning theory approach to minimum error entropy criterion
Recommendations
- Regularization schemes for minimum error entropy principle
- Consistency analysis of an empirical minimum error entropy algorithm
- On the smoothed minimum error entropy criterion
- Learning theory of minimum error entropy under weak moment conditions
- Fast rates of minimum error entropy with heavy-tailed noise
Cited in (42)
- Kernel-based sparse regression with the correntropy-induced loss
- Learning with correntropy-induced losses for regression with mixture of symmetric stable noise
- Extreme learning machine for ranking: generalization analysis and applications
- Distributed robust regression with correntropy losses and regularization kernel networks
- Kernel gradient descent algorithm for information theoretic learning
- \(\Delta \)-entropy: definition, properties and applications in system identification with quantized data
- Consistency analysis of an empirical minimum error entropy algorithm
- Online minimum error entropy algorithm with unbounded sampling
- Robust kernel-based distribution regression
- Statistical analysis of the moving least-squares method with unbounded sampling
- On reproducing kernel and density problems
- On the smoothed minimum error entropy criterion
- Error analysis on regularized regression based on the maximum correntropy criterion
- Distributed regression learning with coefficient regularization
- On extension theorems and their connection to universal consistency in machine learning
- On empirical eigenfunction-based ranking with \(\ell^1\) norm regularization
- Distributed minimum error entropy algorithms
- On the robustness of regularized pairwise learning methods based on kernels
- Distributed kernel gradient descent algorithm for minimum error entropy principle
- Block coordinate type methods for optimization and learning
- A Statistical Learning Approach to Modal Regression
- The MEE principle in data classification: a perceptron-based analysis
- Learning rate of magnitude-preserving regularization ranking with dependent samples
- Regularized Nyström subsampling in regression and ranking problems under general smoothness assumptions
- Refined generalization bounds of gradient learning over reproducing kernel Hilbert spaces
- Stability and optimization error of stochastic gradient descent for pairwise learning
- Convergence analysis of an empirical eigenfunction-based ranking algorithm with truncated sparsity
- Fast rates of minimum error entropy with heavy-tailed noise
- Learning rates for regularized least squares ranking algorithm
- Mixture quantized error entropy for recursive least squares adaptive filtering
- The performance of semi-supervised Laplacian regularized regression with the least square loss
- Debiased magnitude-preserving ranking: learning rate and bias characterization
- Learning under \((1 + \epsilon)\)-moment conditions
- New insights into learning with correntropy-based regression
- Unregularized online learning algorithms with general loss functions
- A Framework of Learning Through Empirical Gain Maximization
- A metric entropy bound is not sufficient for learnability
- Learning theory of minimum error entropy under weak moment conditions
- Optimal learning with Gaussians and correntropy loss
- Robust pairwise learning with Huber loss
- Error bounds for learning the kernel
- Minimum error entropy classification