Unregularized online learning algorithms with general loss functions
Publication: 504379
DOI: 10.1016/j.acha.2015.08.007
zbMath: 1382.68204
arXiv: 1503.00623
OpenAlex: W2964224116
Wikidata: Q58758915
Scholia: Q58758915
MaRDI QID: Q504379
Publication date: 16 January 2017
Published in: Applied and Computational Harmonic Analysis
Full work available at URL: https://arxiv.org/abs/1503.00623
Mathematics Subject Classification:
- Learning and adaptive systems in artificial intelligence (68T05)
- Hilbert spaces with reproducing kernels (= (proper) functional Hilbert spaces, including de Branges-Rovnyak and other structured spaces) (46E22)
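The entry carries no abstract, but the title and classification point to the unregularized online gradient descent scheme in a reproducing kernel Hilbert space that this paper analyzes for general loss functions. Below is a minimal, hypothetical sketch of that scheme: the update f_{t+1} = f_t - eta_t * l'(f_t(x_t), y_t) * K(x_t, .) is the standard form in this literature, while the Gaussian kernel, the 1/sqrt(t) step sizes, and the least-squares example are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def gaussian_kernel(x, xp, sigma=1.0):
    # Gaussian (RBF) kernel on R^d; an illustrative choice, not from the paper.
    return np.exp(-np.linalg.norm(x - xp) ** 2 / (2 * sigma ** 2))

def unregularized_online_learning(samples, step_sizes, loss_grad, kernel=gaussian_kernel):
    # Unregularized online gradient descent in an RKHS:
    #   f_1 = 0,  f_{t+1} = f_t - eta_t * loss_grad(f_t(x_t), y_t) * K(x_t, .)
    # where loss_grad(u, y) is the derivative of the loss in the prediction u.
    # f_t is kept as a kernel expansion over the examples seen so far.
    support, coeffs = [], []
    for (x, y), eta in zip(samples, step_sizes):
        fx = sum(c * kernel(xs, x) for c, xs in zip(coeffs, support))  # f_t(x_t)
        support.append(x)
        coeffs.append(-eta * loss_grad(fx, y))  # new kernel term centered at x_t
    def f(x):
        return sum(c * kernel(xs, x) for c, xs in zip(coeffs, support))
    return f

# Usage with the least-squares loss, whose derivative in the prediction is u - y.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
Y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.standard_normal(200)
etas = [(t + 1) ** -0.5 for t in range(len(X))]  # assumed decaying step-size schedule
f = unregularized_online_learning(zip(X, Y), etas, loss_grad=lambda u, y: u - y)
print(f(np.array([0.5])))  # prediction near sin(pi * 0.5) = 1 given enough samples
```

Note that no regularization term appears in the update; convergence in this setting is driven entirely by the decaying step sizes, which is the point of contrast with the regularized online algorithms cited below.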
Related Items (25)
- Deep distributed convolutional neural networks: Universality
- Unnamed Item
- Distributed kernel gradient descent algorithm for minimum error entropy principle
- Differentially private SGD with non-smooth losses
- Theory of deep convolutional neural networks: downsampling
- Convergence of online pairwise regression learning with quadratic loss
- Convergence analysis for kernel-regularized online regression associated with an RRKHS
- Federated learning for minimizing nonsmooth convex loss functions
- High-probability generalization bounds for pointwise uniformly stable algorithms
- Online Pairwise Learning Algorithms
- Analysis of Online Composite Mirror Descent Algorithm
- Online minimum error entropy algorithm with unbounded sampling
- Kernel gradient descent algorithm for information theoretic learning
- Generalization ability of online pairwise support vector machine
- Unnamed Item
- Convergence of online mirror descent
- Online pairwise learning algorithms with convex loss functions
- Online regularized pairwise learning with least squares loss
- Fast and strong convergence of online learning algorithms
- Deep neural networks for rotation-invariance approximation and learning
- Analysis of regularized Nyström subsampling for regression functions of low smoothness
- Analysis of singular value thresholding algorithm for matrix completion
- Iterative gradient descent for outlier detection
- Error analysis of the kernel regularized regression based on refined convex losses and RKBSs
- Error analysis of the moving least-squares regression learning algorithm with β-mixing and non-identical sampling
Cites Work
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Consistency analysis of an empirical minimum error entropy algorithm
- Learning the coordinate gradients
- Model selection for regularized least-squares algorithm in learning theory
- Multi-kernel regularized classifiers
- Online gradient descent learning algorithms
- Statistical behavior and consistency of classification methods based on convex risk minimization
- Fully online classification by regularization
- Ranking and empirical minimization of \(U\)-statistics
- Online learning algorithms
- Online Learning as Stochastic Approximation of Regularization Paths: Optimality and Almost-Sure Convergence
- Learning Theory
- Support Vector Machines
- On the Generalization Ability of On-Line Learning Algorithms
- Online Regularized Classification Algorithms
- Some comparisons for Gaussian processes
- DOI: 10.1162/1532443041424300
- DOI: 10.1162/153244303321897690
- Regularization schemes for minimum error entropy principle
- Convexity, Classification, and Risk Bounds
- Theory of Reproducing Kernels