Optimality of robust online learning
DOI: 10.1007/s10208-023-09616-9
MaRDI QID: Q6645952
Authors: Zheng-Chu Guo, Andreas Christmann, Lei Shi
Publication date: 29 November 2024
Published in: Foundations of Computational Mathematics
MSC classification:
- 62J02 General nonlinear regression
- 62J05 Linear regression; mixed models
- 68T05 Learning and adaptive systems in artificial intelligence
- 68W27 Online algorithms; streaming algorithms
Cites Work
- Theory of Reproducing Kernels
- Robust Statistics
- Nonparametric stochastic approximation with large step-sizes
- Learning Theory
- Support Vector Machines
- Title not available
- Correntropy: Properties and Applications in Non-Gaussian Signal Processing
- Robust Statistics
- Consistency and robustness of kernel-based regression in convex risk minimization
- Robust Stochastic Approximation Approach to Stochastic Programming
- Robust regression using iteratively reweighted least-squares
- Optimal rates for the regularized least-squares algorithm
- Learning theory estimates via integral operators and their approximations
- Online gradient descent learning algorithms
- Adaptive kernel methods using the balancing principle
- Early stopping and non-parametric regression: an optimal data-dependent stopping rule
- Estimating the approximation error in learning theory
- On regularization algorithms in learning theory
- On consistency and robustness properties of support vector machines for heavy-tailed distributions
- Learning with the maximum correntropy criterion induced losses for regression
- Online learning with Markov sampling
- On Complexity Issues of Online Learning Algorithms
- Robustness of reweighted least squares kernel based regression
- How to compare different loss functions and their risks
- Unregularized online learning algorithms with general loss functions
- Generalized correlation function: definition, properties, and application to blind equalization
- Optimal rates for regularization of statistical inverse learning problems
- Optimization methods for large-scale machine learning
- Breakdown points of Cauchy regression-scale estimators
- Learning theory of distributed spectral algorithms
- Balancing principle in supervised learning for a general regularization scheme
- Learning theory of minimum error entropy under weak moment conditions
- Optimal learning with Gaussians and correntropy loss
- A Framework of Learning Through Empirical Gain Maximization
- Gradient descent for robust kernel-based regression
- Capacity dependent analysis for functional online learning algorithms