Distributed minimum error entropy algorithms
From MaRDI portal
Publication:4969211
Authors: Xin Guo, Ting Hu, Qiang Wu
Publication date: 5 October 2020
Full work available at URL: https://jmlr.csail.mit.edu/papers/v21/18-696.html
Recommendations
- Distributed kernel gradient descent algorithm for minimum error entropy principle
- Online minimum error entropy algorithm with unbounded sampling
- Learning theory of minimum error entropy under weak moment conditions
- Distributed semi-supervised regression learning with coefficient regularization
- Regularization schemes for minimum error entropy principle
Keywords: reproducing kernel Hilbert space; minimum error entropy; information theoretic learning; distributed method; semisupervised data
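For orientation, the minimum error entropy (MEE) principle named in the title fits a model by maximizing the empirical information potential of the residuals, V(w) = (1/n²) Σᵢⱼ Gσ(eᵢ − eⱼ), rather than minimizing squared error; the distributed variant splits the sample across nodes, fits locally, and averages. The following is a minimal sketch only, assuming a 1-D linear model y = w·x + noise, a Gaussian kernel, a fixed step size, and equal-size data partitions — not the algorithm analyzed in the paper.

```python
import math
import random

def local_mee_fit(xs, ys, sigma=1.0, lr=0.5, steps=200):
    """Fit w by gradient ascent on the empirical information potential
    V(w) = (1/n^2) * sum_{i,j} exp(-(e_i - e_j)^2 / (2 sigma^2)),
    where e_i = y_i - w * x_i (the MEE criterion for a 1-D linear model)."""
    n = len(xs)
    w = 0.0
    for _ in range(steps):
        e = [y - w * x for x, y in zip(xs, ys)]
        grad = 0.0
        for i in range(n):
            for j in range(n):
                d = e[i] - e[j]
                # dV/dw term: kernel weight * pairwise error gap * (x_i - x_j)
                grad += math.exp(-d * d / (2 * sigma ** 2)) * d * (xs[i] - xs[j])
        w += lr * grad / (n * n * sigma ** 2)
    return w

def distributed_mee(xs, ys, num_nodes=4):
    """Divide-and-conquer: fit MEE on each node's chunk, average the estimates."""
    chunk = len(xs) // num_nodes
    ws = [local_mee_fit(xs[k * chunk:(k + 1) * chunk],
                        ys[k * chunk:(k + 1) * chunk])
          for k in range(num_nodes)]
    return sum(ws) / len(ws)
```

Note that MEE is shift-invariant (it only sees pairwise error differences), so a real implementation would also have to recover the intercept separately, e.g. from the mean residual.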
Cites Work
- Weak convergence and empirical processes. With applications to statistics
- Learning Theory
- Remarks on Inequalities for Large Deviation Probabilities
- On Estimation of a Probability Density Function and Mode
- Optimal rates for the regularized least-squares algorithm
- Information theoretic learning. Renyi's entropy and kernel perspectives
- Consistency analysis of an empirical minimum error entropy algorithm
- The MEE principle in data classification: a perceptron-based analysis
- Learning theory approach to minimum error entropy criterion
- Semi-supervised learning on Riemannian manifolds
- The covering number in learning theory
- Learning theory estimates via integral operators and their approximations
- Divide and conquer kernel ridge regression: a distributed algorithm with minimax optimal rates
- Spectral Algorithms for Supervised Learning
- Cross-validation based adaptation for regularization operators in learning theory
- On regularization algorithms in learning theory
- On the robustness of regularized pairwise learning methods based on kernels
- Title not available
- Regularization schemes for minimum error entropy principle
- Regularization in kernel learning
- Unregularized online learning algorithms with general loss functions
- Distributed learning with regularized least squares
- Optimal rates for regularization of statistical inverse learning problems
- Convergence rates of kernel conjugate gradient for random design regression
- Learning theory of distributed regression with bias corrected regularization kernel network
- Semi-supervised learning using greedy Max-Cut
- Distributed kernel-based gradient descent algorithms
- Learning theory of distributed spectral algorithms
- Parallelizing spectrally regularized kernel algorithms
- Online Pairwise Learning Algorithms
- Distributed kernel gradient descent algorithm for minimum error entropy principle
- Convergence of Gradient Descent for Minimum Error Entropy Principle in Linear Regression
- Online minimum error entropy algorithm with unbounded sampling
- Minimum Total Error Entropy Method for Parameter Estimation
Cited In (11)
- On the convergence of gradient descent for robust functional linear regression
- Distributed robust regression with correntropy losses and regularization kernel networks
- Online minimum error entropy algorithm with unbounded sampling
- Semi-supervised learning with summary statistics
- Large margin unified machines with non-i.i.d. process
- Multilevel aggregation of central server models: a minimum relative entropy approach
- Distributed kernel gradient descent algorithm for minimum error entropy principle
- Averaging versus voting: a comparative study of strategies for distributed classification
- Pairwise learning problems with regularization networks and Nyström subsampling approach
- A Framework of Learning Through Empirical Gain Maximization
- Learning theory of minimum error entropy under weak moment conditions