Pages that link to "Item:Q4637017"
From MaRDI portal
The following pages link to "Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression" (Q4637017):
Displayed 14 items.
- Generalization properties of doubly stochastic learning algorithms (Q1635837)
- On variance reduction for stochastic smooth convex optimization with multiplicative noise (Q1739038)
- Finite impulse response models: a non-asymptotic analysis of the least squares estimator (Q2040046)
- Concentration bounds for temporal difference learning with linear function approximation: the case of batch data and uniform sampling (Q2051259)
- Dimension independent excess risk by stochastic gradient descent (Q2084455)
- From inexact optimization to learning via gradient concentration (Q2111477)
- Bridging the gap between constant step size stochastic gradient descent and Markov chains (Q2196224)
- Some Limit Properties of Markov Chains Induced by Recursive Stochastic Algorithms (Q5037552)
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization (Q5076671)
- On the rates of convergence of parallelized averaged stochastic gradient algorithms (Q5110810)
- On the Adaptivity of Stochastic Gradient-Based Optimization (Q5114394)
- (Q5149264)
- (Q5159408)
- Dual Space Preconditioning for Gradient Descent (Q5857297)