Online gradient descent learning algorithms

From MaRDI portal

Publication:1029541

DOI: 10.1007/S10208-006-0237-Y
zbMath: 1175.68211
DBLP: journals/focm/YingP08
OpenAlex: W2059571389
Wikidata: Q58759005
Scholia: Q58759005
MaRDI QID: Q1029541

Yiming Ying, Massimiliano Pontil

Publication date: 13 July 2009

Published in: Foundations of Computational Mathematics

Full work available at URL: https://doi.org/10.1007/s10208-006-0237-y
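The publication concerns online gradient descent learning in a reproducing kernel Hilbert space. A minimal illustrative sketch of such a scheme is shown below; the function names, the Gaussian kernel width, and the polynomially decaying step-size schedule are our own illustrative choices, not the specific algorithm or constants analyzed in the paper.

```python
import math
import random

def gaussian_kernel(x, z, sigma=0.1):
    """Gaussian (RBF) kernel on scalar inputs."""
    return math.exp(-(x - z) ** 2 / (2 * sigma ** 2))

def online_kernel_gd(stream, eta0=0.5, theta=0.5, sigma=0.1):
    """Online gradient descent for least-squares regression in an RKHS.

    The hypothesis after t steps is f_t = sum_i c_i K(x_i, .); on example
    (x_t, y_t) it is updated as f_{t+1} = f_t - eta_t (f_t(x_t) - y_t) K(x_t, .),
    with step size eta_t = eta0 / t**theta.
    """
    coefs, points = [], []
    for t, (x, y) in enumerate(stream, start=1):
        # Evaluate the current hypothesis f_t at the incoming point.
        fx = sum(c * gaussian_kernel(z, x, sigma) for c, z in zip(coefs, points))
        eta = eta0 / t ** theta
        # Gradient of (f(x) - y)^2 / 2 in the RKHS is (f(x) - y) K(x, .),
        # so the update appends one new kernel expansion term.
        coefs.append(-eta * (fx - y))
        points.append(x)

    def f(x):
        return sum(c * gaussian_kernel(z, x, sigma) for c, z in zip(coefs, points))
    return f

# Learn sin(2*pi*x) on [0, 1] from a noisy sample stream.
rng = random.Random(0)
stream = []
for _ in range(2000):
    x = rng.random()
    stream.append((x, math.sin(2 * math.pi * x) + 0.1 * rng.gauss(0, 1)))
f = online_kernel_gd(stream)
```

The learned `f` approximates a kernel-smoothed version of the regression function, so e.g. `f(0.25)` is close to (a slightly attenuated) `sin(pi/2) = 1`.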


Related Items (35)

- Online gradient descent algorithms for functional data learning
- Online regularized learning with pairwise loss functions
- The kernel regularized learning algorithm for solving Laplace equation with Dirichlet boundary
- Graph-Dependent Implicit Regularisation for Distributed Stochastic Subgradient Descent
- Generalization properties of doubly stochastic learning algorithms
- Nonparametric stochastic approximation with large step-sizes
- Rates of convergence of randomized Kaczmarz algorithms in Hilbert spaces
- Differentially private SGD with non-smooth losses
- Capacity dependent analysis for functional online learning algorithms
- Federated learning for minimizing nonsmooth convex loss functions
- Unnamed Item
- On the Convergence of Stochastic Gradient Descent for Nonlinear Ill-Posed Problems
- Online regularized learning algorithm for functional data
- Convergence analysis of online learning algorithm with two-stage step size
- Online Pairwise Learning Algorithms
- Analysis of Online Composite Mirror Descent Algorithm
- LQG Online Learning
- Online minimum error entropy algorithm with unbounded sampling
- Kernel gradient descent algorithm for information theoretic learning
- Unregularized online learning algorithms with general loss functions
- On the regularizing property of stochastic gradient descent
- Unnamed Item
- Convergence of online mirror descent
- Unregularized online algorithms with varying Gaussians
- Sparse online regression algorithm with insensitive loss functions
- Differentially private SGD with random features
- Regret analysis of an online majorized semi-proximal ADMM for online composite optimization
- Optimality of robust online learning
- Online regularized pairwise learning with least squares loss
- Fast and strong convergence of online learning algorithms
- Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression
- Optimal Rates for Multi-pass Stochastic Gradient Methods
- Unnamed Item
- A sieve stochastic gradient descent estimator for online nonparametric regression in Sobolev ellipsoids
- An analysis of stochastic variance reduced gradient for linear inverse problems

This page was built for publication: Online gradient descent learning algorithms