On the asymptotic rate of convergence of stochastic Newton algorithms and their weighted averaged versions
Publication:2696929
DOI: 10.1007/s10589-022-00442-3
OpenAlex: W4313291849
MaRDI QID: Q2696929
Claire Boyer, Antoine Godichon-Baggioni
Publication date: 17 April 2023
Published in: Computational Optimization and Applications
Full work available at URL: https://arxiv.org/abs/2011.09706
Cites Work
- A Stochastic Quasi-Newton Method for Large-Scale Optimization
- On the almost sure asymptotic behaviour of stochastic algorithms
- Online estimation of the asymptotic variance for averaged stochastic gradient algorithms
- A Generalization of the Averaging Procedure: The Use of Two-Time-Scale Algorithms
- Acceleration of Stochastic Approximation by Averaging
- Asymptotic Almost Sure Efficiency of Averaged Stochastic Algorithms
- Optimization Methods for Large-Scale Machine Learning
- An Efficient Stochastic Newton Algorithm for Parameter Estimation in Logistic Regressions
- L^p and almost sure rates of convergence of averaged stochastic gradient algorithms: locally strongly convex objective