Learning theory of minimum error entropy under weak moment conditions
From MaRDI portal
Publication:5037873
Cites work
- scientific article; zbMATH DE number 3954047
- A Statistical Learning Approach to Modal Regression
- A statistical learning assessment of Huber regression
- Approximation theorems of mathematical statistics
- Blind source separation using Renyi's \(\alpha\)-marginal entropies.
- Consistency analysis of an empirical minimum error entropy algorithm
- Convergence of Gradient Descent for Minimum Error Entropy Principle in Linear Regression
- Convexity, Classification, and Risk Bounds
- Distributed kernel gradient descent algorithm for minimum error entropy principle
- Distributed minimum error entropy algorithms
- Empirical minimization
- Fast rates in statistical and online learning
- Information theoretic learning. Renyi's entropy and kernel perspectives
- Kernel gradient descent algorithm for information theoretic learning
- Learning Theory
- Learning rates for regularized least squares ranking algorithm
- Learning theory approach to minimum error entropy criterion
- Learning under \((1 + \epsilon)\)-moment conditions
- Minimum Total Error Entropy Method for Parameter Estimation
- New insights into learning with correntropy-based regression
- Online regularized pairwise learning with least squares loss
- Optimal learning with Gaussians and correntropy loss
- Probability Inequalities for Sums of Bounded Random Variables
- Regularization schemes for minimum error entropy principle
- Robust Statistics
- Statistical behavior and consistency of classification methods based on convex risk minimization.
- Support Vector Machines
- The MEE principle in data classification: a perceptron-based analysis
- The convergence rate of a regularized ranking algorithm
Cited in (23)
- Linear combinations of two Bernstein polynomials
- On the convergence of gradient descent for robust functional linear regression
- Rates of approximation by ReLU shallow neural networks
- Online minimum error entropy algorithm with unbounded sampling
- On choosing initial values of iteratively reweighted \(\ell_1\) algorithms for the piece-wise exponential penalty
- On approximation of unbounded functions by certain modified Bernstein operators
- Some further results on the minimum error entropy estimation
- Compressed data separation via unconstrained l1-split analysis
- Distributed minimum error entropy algorithms
- Convergence theorems in Orlicz and Bögel continuous functions spaces by means of Kantorovich discrete type sampling operators
- Approximation by modified Bernstein polynomials based on real parameters
- Learning theory approach to minimum error entropy criterion
- On weak learning
- Some new fractional integral inequalities for \((h_1, h_2)\)-convex functions
- Optimality of robust online learning
- On wavelet type generalized Bézier operators
- Approximation properties of exponential type operators connected to \(p(x)=2x^{3/2}\)
- Error analysis of classification learning algorithms based on LUMs loss
- Some new inequalities and numerical results of bivariate Bernstein-type operator including Bézier basis and its GBS operator
- Shape preserving properties of \((\mathfrak{p},\mathfrak{q})\) Bernstein Bézier curves and corresponding results over \([a,b]\)
- A metric entropy bound is not sufficient for learnability
- Asymptotic properties of Kantorovich-type Szász-Mirakjan operators of higher order
- Minimum error entropy classification.