An analysis of stochastic variance reduced gradient for linear inverse problems
From MaRDI portal
Publication: 5019935
DOI: 10.1088/1361-6420/ac4428
zbMath: 1480.65094
arXiv: 2108.04429
OpenAlex: W3192167672
MaRDI QID: Q5019935
Jun Zou, Bangti Jin, Zehui Zhou
Publication date: 11 January 2022
Published in: Inverse Problems
Full work available at URL: https://arxiv.org/abs/2108.04429
Cites Work
- Nonparametric stochastic approximation with large step-sizes
- Iterative regularization methods for nonlinear ill-posed problems
- A randomized Kaczmarz algorithm with exponential convergence
- Online gradient descent learning algorithms
- On Nesterov acceleration for Landweber iteration of linear ill-posed problems
- Regularization tools version 4.0 for Matlab 7.3
- Inverse Problems
- Online Learning as Stochastic Approximation of Regularization Paths: Optimality and Almost-Sure Convergence
- Relaxation methods for image reconstruction
- Online learning in optical tomography: a stochastic approach
- Optimal Rates for Multi-pass Stochastic Gradient Methods
- Optimization Methods for Large-Scale Machine Learning
- On the regularizing property of stochastic gradient descent
- Optimal-order convergence of Nesterov acceleration for linear ill-posed problems*
- On the Convergence of Stochastic Gradient Descent for Nonlinear Ill-Posed Problems
- On the discrepancy principle for stochastic gradient descent
- A Stochastic Approximation Method
- Stochastic EM methods with variance reduction for penalised PET reconstructions