Regularization schemes for minimum error entropy principle
DOI: 10.1142/S0219530514500110
zbMath: 1329.68216
OpenAlex: W2025199224
MaRDI QID: Q5253870
Authors: Qiang Wu, Ding-Xuan Zhou, Ting Hu, Jun Fan
Publication date: 5 June 2015
Published in: Analysis and Applications
Full work available at URL: https://doi.org/10.1142/s0219530514500110
Mathematics Subject Classification:
- General nonlinear regression (62J02)
- Learning and adaptive systems in artificial intelligence (68T05)
- Approximation by arbitrary nonlinear expressions; widths and entropy (41A46)
Related Items (44)
- On reproducing kernel and density problems
- Online regularized learning with pairwise loss functions
- Learning theory of minimum error entropy under weak moment conditions
- Block coordinate type methods for optimization and learning
- On the robustness of regularized pairwise learning methods based on kernels
- Optimal learning rates for kernel partial least squares
- Kernel-based conditional canonical correlation analysis via modified Tikhonov regularization
- Unnamed Item
- Fast rates of minimum error entropy with heavy-tailed noise
- Approximation on variable exponent spaces by linear integral operators
- Distributed kernel gradient descent algorithm for minimum error entropy principle
- Kernel-based sparse regression with the correntropy-induced loss
- Error analysis on regularized regression based on the maximum correntropy criterion
- Distributed learning with multi-penalty regularization
- Learning theory of distributed spectral algorithms
- Necessary and sufficient optimality conditions for some robust variational problems
- Rates of approximation by ReLU shallow neural networks
- Learning theory approach to a system identification problem involving atomic norm
- On the K-functional in learning theory
- Error analysis of kernel regularized pairwise learning with a strongly convex loss
- Supersmooth density estimations over \(L^p\) risk by wavelets
- On meshfree numerical differentiation
- Faster convergence of a randomized coordinate descent method for linearly constrained optimization problems
- The performance of semi-supervised Laplacian regularized regression with the least square loss
- Online Pairwise Learning Algorithms
- Analysis of Online Composite Mirror Descent Algorithm
- Online minimum error entropy algorithm with unbounded sampling
- Analysis of approximation by linear operators on variable \(L_\rho^{p(\cdot)}\) spaces and applications in learning theory
- Kernel gradient descent algorithm for information theoretic learning
- Constructive analysis for coefficient regularization regression algorithms
- Unregularized online learning algorithms with general loss functions
- Distributed kernel-based gradient descent algorithms
- Convergence of online mirror descent
- Learning Theory of Randomized Sparse Kaczmarz Method
- Online pairwise learning algorithms with convex loss functions
- Online regularized pairwise learning with least squares loss
- Convergence analysis of distributed multi-penalty regularized pairwise learning
- On extension theorems and their connection to universal consistency in machine learning
- Learning rates of regression with q-norm loss and threshold
- Error bounds for learning the kernel
- Semi-supervised learning with summary statistics
- Optimal learning with Gaussians and correntropy loss
- Unnamed Item
- Thresholded spectral algorithms for sparse approximations
Cites Work
- Unnamed Item
- Multi-kernel regularized classifiers
- Blind source separation using Rényi's \(\alpha\)-marginal entropies
- Learning rates of least-square regularized regression
- The MEE Principle in Data Classification: A Perceptron-Based Analysis
- Online regression with varying Gaussians and non-identical distributions
- Information Theoretic Learning