Online Learning as Stochastic Approximation of Regularization Paths: Optimality and Almost-Sure Convergence
Publication:2986183
DOI: 10.1109/TIT.2014.2332531
zbMATH Open: 1360.62192
OpenAlex: W2087789467
MaRDI QID: Q2986183
Publication date: 16 May 2017
Published in: IEEE Transactions on Information Theory
Full work available at URL: https://doi.org/10.1109/tit.2014.2332531
Mathematics Subject Classification:
- Nonparametric regression and quantile regression (62G08)
- Learning and adaptive systems in artificial intelligence (68T05)
- Stochastic approximation (62L20)
Cited In (28)
- An analysis of stochastic variance reduced gradient for linear inverse problems
- Title not available
- A loss bound model for on-line stochastic prediction algorithms
- On the Convergence of Stochastic Gradient Descent for Nonlinear Ill-Posed Problems
- On the regularizing property of stochastic gradient descent
- Stochastic subspace correction in Hilbert space
- Fast and strong convergence of online learning algorithms
- Analysis of Online Composite Mirror Descent Algorithm
- A sieve stochastic gradient descent estimator for online nonparametric regression in Sobolev ellipsoids
- Online Proximal Learning Over Jointly Sparse Multitask Networks With $\ell _{\infty, 1}$ Regularization
- Concentration bounds for temporal difference learning with linear function approximation: the case of batch data and uniform sampling
- Online local learning via semidefinite programming
- Generalization properties of doubly stochastic learning algorithms
- Title not available
- Title not available
- Title not available
- Online regularized learning algorithm for functional data
- Convergence analysis of online learning algorithm with two-stage step size
- Learning Theory of Randomized Sparse Kaczmarz Method
- Online Pairwise Learning Algorithms
- Graph-Dependent Implicit Regularisation for Distributed Stochastic Subgradient Descent
- Nonparametric stochastic approximation with large step-sizes
- Unregularized online learning algorithms with general loss functions
- Sparse online regression algorithm with insensitive loss functions
- Title not available
- Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression
- Title not available
- An Online Projection Estimator for Nonparametric Regression in Reproducing Kernel Hilbert Spaces