Distributed kernel gradient descent algorithm for minimum error entropy principle
Publication: 2175022
DOI: 10.1016/j.acha.2019.01.002
zbMath: 1434.68416
OpenAlex: W2911067482
Wikidata: Q128593977 (Scholia: Q128593977)
MaRDI QID: Q2175022
Authors: Qiang Wu, Ting Hu, Ding-Xuan Zhou
Publication date: 27 April 2020
Published in: Applied and Computational Harmonic Analysis
Full work available at URL: https://doi.org/10.1016/j.acha.2019.01.002
Classification (MSC):
- Learning and adaptive systems in artificial intelligence (68T05)
- Measures of information, entropy (94A17)
- Computational aspects of data analysis and big data (68T09)
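The title refers to a divide-and-conquer scheme: kernel gradient descent for the minimum error entropy (MEE) principle is run on disjoint data blocks, and the resulting local estimators are averaged. Below is a minimal illustrative sketch of that idea, not the paper's implementation: it assumes Gaussian kernels for both the RKHS and the Parzen windowing, a fixed step size, and hypothetical names (local_mee_gd, distributed_mee) and parameter values chosen only for illustration.

```python
import numpy as np

def gram(X1, X2, sigma=1.0):
    """Gaussian RBF Gram matrix K(x, x') = exp(-|x - x'|^2 / (2 sigma^2))."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def local_mee_gd(X, y, T=200, eta=0.5, h=1.0, sigma=1.0):
    """Kernel gradient descent for empirical MEE on one data block.

    The iterate f_t = sum_i alpha_i K(x_i, .) ascends the empirical
    information potential V(f) = (1/n^2) sum_{i,j} exp(-(e_i - e_j)^2 / (2 h^2)),
    where e_i = y_i - f(x_i); maximizing V minimizes the Parzen estimate
    of Renyi's entropy of the error.
    """
    n = len(y)
    K = gram(X, X, sigma)
    alpha = np.zeros(n)
    for _ in range(T):
        e = y - K @ alpha                          # residuals e_i
        D = e[:, None] - e[None, :]                # pairwise differences e_i - e_j
        W = np.exp(-D ** 2 / (2.0 * h ** 2)) * D   # windowed pairwise differences
        # RKHS gradient of V has coefficient (row sum - column sum) of W at x_i
        g = W.sum(axis=1) - W.sum(axis=0)
        alpha += eta / (n ** 2 * h ** 2) * g       # gradient ascent step on V
    return alpha

def distributed_mee(X, y, n_blocks=4, **kw):
    """Divide and conquer: train MEE gradient descent per block, average predictors."""
    idx = np.array_split(np.arange(len(y)), n_blocks)
    fitted = [(X[b], local_mee_gd(X[b], y[b], **kw)) for b in idx]
    sigma = kw.get("sigma", 1.0)
    def f_bar(Xq):
        return np.mean([gram(Xq, Xb, sigma) @ a for Xb, a in fitted], axis=0)
    return f_bar
```

Since the entropy of the error is invariant under constant shifts, an MEE estimator is determined only up to an additive constant; a common correction, shown in the usage sketch below, is to add back the mean residual.

```python
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(400, 1))
y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.standard_normal(400)

f_bar = distributed_mee(X, y, n_blocks=4, T=300, eta=0.5, h=1.0, sigma=0.3)
y_hat = f_bar(X) + np.mean(y - f_bar(X))  # restore the constant offset MEE leaves free
```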
Related Items
- Learning theory of minimum error entropy under weak moment conditions
- Block coordinate type methods for optimization and learning
- Distributed spectral pairwise ranking algorithms
- Fast rates of minimum error entropy with heavy-tailed noise
- Averaging versus voting: a comparative study of strategies for distributed classification
- Infinite-dimensional stochastic transforms and reproducing kernel Hilbert space
- Kernel-based maximum correntropy criterion with gradient descent method
- Kernel gradient descent algorithm for information theoretic learning
- Robust kernel-based distribution regression
- Distributed regularized least squares with flexible Gaussian kernels
- Convergence analysis of distributed multi-penalty regularized pairwise learning
- A Framework of Learning Through Empirical Gain Maximization
Cites Work
- Consistency analysis of an empirical minimum error entropy algorithm
- Unregularized online learning algorithms with general loss functions
- Stochastic gradient algorithm under (h, φ)-entropy criterion
- Distributed kernel-based gradient descent algorithms
- Blind source separation using Rényi's α-marginal entropies
- Optimum bounds for the distributions of martingales in Banach spaces
- Deep relaxation: partial differential equations for optimizing deep neural networks
- Optimal rates for the regularized least-squares algorithm
- On early stopping in gradient descent learning
- Divide and Conquer Kernel Ridge Regression: A Distributed Algorithm with Minimax Optimal Rates
- The MEE Principle in Data Classification: A Perceptron-Based Analysis
- Learning Theory
- Minimum Total Error Entropy Method for Parameter Estimation
- On the optimality of averaging in distributed statistical learning
- Convergence of Gradient Descent for Minimum Error Entropy Principle in Linear Regression
- Regularization schemes for minimum error entropy principle
- Thresholded spectral algorithms for sparse approximations
- Learning theory of distributed spectral algorithms
- Online Pairwise Learning Algorithms