Online gradient descent learning algorithms
From MaRDI portal
Publication:1029541
DOI: 10.1007/s10208-006-0237-y · zbMath: 1175.68211 · DBLP: journals/focm/YingP08 · OpenAlex: W2059571389 · Wikidata: Q58759005 · Scholia: Q58759005 · MaRDI QID: Q1029541
Yiming Ying, Massimiliano Pontil
Publication date: 13 July 2009
Published in: Foundations of Computational Mathematics
Full work available at URL: https://doi.org/10.1007/s10208-006-0237-y
Computational learning theory (68Q32) · General nonlinear regression (62J02) · Learning and adaptive systems in artificial intelligence (68T05) · Stochastic approximation (62L20)
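The publication analyzes online (stochastic) gradient descent for regression in a reproducing kernel Hilbert space. As a rough illustration of that class of algorithms, the following is a minimal sketch of online regularized kernel gradient descent for least squares, where at step t the iterate is updated by the gradient of the instantaneous regularized squared loss with a polynomially decaying step size. All parameter names and values here (kernel width, step-size schedule, regularization) are illustrative assumptions, not the paper's specific choices.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel on R^d (an illustrative choice of kernel)."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def online_gradient_descent(samples, lam=0.01, eta0=0.5, theta=0.5, sigma=0.3):
    """Online regularized gradient descent in an RKHS for least-squares regression.

    With f_t = sum_i c_i K(x_i, .), the update on sample (x_t, y_t) is
        f_{t+1} = f_t - eta_t * ((f_t(x_t) - y_t) K_{x_t} + lam * f_t),
    where eta_t = eta0 * t^(-theta) is a decaying step size.
    Returns the support points and expansion coefficients of the final iterate.
    """
    xs, coefs = [], []
    for t, (x_t, y_t) in enumerate(samples, start=1):
        eta_t = eta0 * t ** (-theta)
        # evaluate the current iterate at the new input
        f_xt = sum(c * gaussian_kernel(x, x_t, sigma) for x, c in zip(xs, coefs))
        # shrink existing coefficients (effect of the regularization term) ...
        coefs = [(1.0 - eta_t * lam) * c for c in coefs]
        # ... and append a new kernel section for the loss-gradient term
        xs.append(x_t)
        coefs.append(-eta_t * (f_xt - y_t))
    return xs, coefs

def predict(xs, coefs, x, sigma=0.3):
    """Evaluate the learned function f = sum_i c_i K(x_i, .) at x."""
    return sum(c * gaussian_kernel(xi, x, sigma) for xi, c in zip(xs, coefs))
```

Each iterate is a finite kernel expansion over the inputs seen so far, so no explicit feature map is needed; convergence rates for schemes of this form, under decaying step sizes, are the subject of the paper.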
Related Items (35)
Online gradient descent algorithms for functional data learning ⋮ Online regularized learning with pairwise loss functions ⋮ The kernel regularized learning algorithm for solving Laplace equation with Dirichlet boundary ⋮ Graph-Dependent Implicit Regularisation for Distributed Stochastic Subgradient Descent ⋮ Generalization properties of doubly stochastic learning algorithms ⋮ Nonparametric stochastic approximation with large step-sizes ⋮ Rates of convergence of randomized Kaczmarz algorithms in Hilbert spaces ⋮ Differentially private SGD with non-smooth losses ⋮ Capacity dependent analysis for functional online learning algorithms ⋮ Federated learning for minimizing nonsmooth convex loss functions ⋮ Unnamed Item ⋮ On the Convergence of Stochastic Gradient Descent for Nonlinear Ill-Posed Problems ⋮ Online regularized learning algorithm for functional data ⋮ Convergence analysis of online learning algorithm with two-stage step size ⋮ Online Pairwise Learning Algorithms ⋮ Analysis of Online Composite Mirror Descent Algorithm ⋮ LQG Online Learning ⋮ Online minimum error entropy algorithm with unbounded sampling ⋮ Kernel gradient descent algorithm for information theoretic learning ⋮ Unregularized online learning algorithms with general loss functions ⋮ On the regularizing property of stochastic gradient descent ⋮ Unnamed Item ⋮ Convergence of online mirror descent ⋮ Unregularized online algorithms with varying Gaussians ⋮ Sparse online regression algorithm with insensitive loss functions ⋮ Differentially private SGD with random features ⋮ Regret analysis of an online majorized semi-proximal ADMM for online composite optimization ⋮ Optimality of robust online learning ⋮ Online regularized pairwise learning with least squares loss ⋮ Fast and strong convergence of online learning algorithms ⋮ Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression ⋮ Optimal Rates for Multi-pass Stochastic Gradient Methods ⋮ Unnamed Item ⋮ A sieve stochastic gradient descent estimator for online nonparametric regression in Sobolev ellipsoids ⋮ An analysis of stochastic variance reduced gradient for linear inverse problems
This page was built for publication: Online gradient descent learning algorithms