Nonparametric stochastic approximation with large step-sizes

DOI: 10.1214/15-AOS1391 · zbMath: 1346.60041 · arXiv: 1408.0361 · OpenAlex: W2964198904 · MaRDI QID: Q309706

Aymeric Dieuleveut, Francis Bach

Publication date: 7 September 2016

Published in: The Annals of Statistics

Full work available at URL: https://arxiv.org/abs/1408.0361
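For a concrete picture of the algorithm analyzed in this paper, the sketch below implements averaged stochastic gradient descent for nonparametric least-squares regression in a reproducing kernel Hilbert space with a large constant step size, which is the setting of the paper. It is a minimal illustration, not the authors' code: the Gaussian kernel, its bandwidth, the step size, and the toy data are all arbitrary choices made for this example.

```python
import numpy as np

def kernel(x, y, bandwidth=0.1):
    """Gaussian RBF kernel; the bandwidth is an arbitrary choice for this demo."""
    return np.exp(-(x - y) ** 2 / (2 * bandwidth ** 2))

def averaged_kernel_sgd(xs, ys, step_size):
    """Constant-step-size SGD on least squares in an RKHS, with
    Polyak-Ruppert (uniform) averaging of the iterates.

    Each iterate is g_n = g_{n-1} - step_size * (g_{n-1}(x_n) - y_n) * K(x_n, .),
    represented by its coefficients on the kernel sections K(x_i, .).
    Returns the coefficients of the averaged predictor (1/n) * (g_1 + ... + g_n).
    """
    n = len(xs)
    coeffs = np.zeros(n)
    for t in range(n):
        # Evaluate the current iterate at the incoming point x_t.
        pred = coeffs[:t] @ kernel(xs[:t], xs[t]) if t > 0 else 0.0
        # The stochastic gradient step adds one new kernel section at x_t.
        coeffs[t] = -step_size * (pred - ys[t])
    # The coefficient introduced at step t appears in the n - t iterates
    # g_{t+1}, ..., g_n, so uniform averaging rescales it by (n - t) / n.
    weights = (n - np.arange(n)) / n
    return coeffs * weights

def predict(xs_train, avg_coeffs, x_new):
    """Evaluate the averaged predictor at a new point."""
    return avg_coeffs @ kernel(xs_train, x_new)

# Toy usage: noisy regression of sin(2*pi*x) on [0, 1].
rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 1.0, size=500)
ys = np.sin(2 * np.pi * xs) + 0.1 * rng.standard_normal(500)
avg = averaged_kernel_sgd(xs, ys, step_size=0.5)   # "large" constant step
print(predict(xs, avg, 0.25))                      # roughly sin(pi/2) = 1
```

Storing one coefficient per observation keeps the update exact at O(n) cost per step; the uniform-averaging weights fall out of the fact that a coefficient created at step t participates in every later iterate.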



Related Items

Stochastic subspace correction in Hilbert space
Graph-Dependent Implicit Regularisation for Distributed Stochastic Subgradient Descent
Complexity Analysis of stochastic gradient methods for PDE-constrained optimal Control Problems with uncertain parameters
Rates of convergence of randomized Kaczmarz algorithms in Hilbert spaces
Approximate maximum likelihood estimation for population genetic inference
Differentially private SGD with non-smooth losses
An Online Projection Estimator for Nonparametric Regression in Reproducing Kernel Hilbert Spaces
Capacity dependent analysis for functional online learning algorithms
Convergence rates of gradient methods for convex optimization in the space of measures
On the Convergence of Stochastic Gradient Descent for Nonlinear Ill-Posed Problems
Bridging the gap between constant step size stochastic gradient descent and Markov chains
Biparametric identification for a free boundary of ductal carcinoma in situ
Online regularized learning algorithm for functional data
Uncertainty Quantification for Stochastic Approximation Limits Using Chaos Expansion
Consistent change-point detection with kernels
On the regularizing property of stochastic gradient descent
A Kernel Multiple Change-point Algorithm via Model Selection
Ensemble Kalman inversion: a derivative-free technique for machine learning tasks
Unregularized online algorithms with varying Gaussians
New efficient algorithms for multiple change-point detection with reproducing kernels
A Markov Chain Theory Approach to Characterizing the Minimax Optimality of Stochastic Gradient Descent (for Least Squares)
Fast and strong convergence of online learning algorithms
Stochastic subspace correction methods and fault tolerance
Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression
Optimal Rates for Multi-pass Stochastic Gradient Methods
An elementary analysis of ridge regression with random design
Dimension independent excess risk by stochastic gradient descent
Optimal indirect estimation for linear inverse problems with discretely sampled functional data
A sieve stochastic gradient descent estimator for online nonparametric regression in Sobolev ellipsoids
An analysis of stochastic variance reduced gradient for linear inverse problems
From inexact optimization to learning via gradient concentration
Distribution-free robust linear regression
Regularization: From Inverse Problems to Large-Scale Machine Learning

