An analysis of stochastic variance reduced gradient for linear inverse problems *
From MaRDI portal
Publication:5019935
DOI: 10.1088/1361-6420/AC4428 · zbMATH Open: 1480.65094 · arXiv: 2108.04429 · OpenAlex: W3192167672 · MaRDI QID: Q5019935
Authors: Bangti Jin, Zehui Zhou, Jun Zou
Publication date: 11 January 2022
Published in: Inverse Problems
Abstract: Stochastic variance reduced gradient (SVRG) is a popular variance reduction technique for accelerating stochastic gradient descent (SGD). We provide a first analysis of the method for solving a class of linear inverse problems through the lens of classical regularization theory. We prove that, for a suitable constant step size schedule, the method can achieve an optimal convergence rate in terms of the noise level (under suitable regularity conditions), and that the variance of the SVRG iterate error is smaller than that of SGD. These theoretical findings are corroborated by a set of numerical experiments.
Full work available at URL: https://arxiv.org/abs/2108.04429
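To make the abstract concrete, here is a minimal sketch of SVRG applied to a linear least-squares problem min_x (1/2n)||Ax − y||², the finite-sum form of a discrete linear inverse problem. The constant step size, epoch length, and variable names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def svrg(A, y, step=0.02, n_epochs=100, seed=0):
    """Sketch of SVRG for min_x (1/2n)||Ax - y||^2 with a constant step size.

    Per epoch: take a snapshot x_snap, compute its full gradient once, then
    run n inner steps with the variance-reduced stochastic gradient
    g_i(x) - g_i(x_snap) + full_grad, which has vanishing variance as
    x, x_snap approach the solution.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(n_epochs):
        x_snap = x.copy()
        full_grad = A.T @ (A @ x_snap - y) / n  # full gradient at the snapshot
        for _ in range(n):
            i = rng.integers(n)
            # variance-reduced gradient estimate
            g = A[i] * (A[i] @ x - y[i]) - A[i] * (A[i] @ x_snap - y[i]) + full_grad
            x -= step * g
    return x

# Usage: a small noise-free example where the iterate should recover x_true.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 5))
x_true = rng.standard_normal(5)
y = A @ x_true
x_hat = svrg(A, y)
print(np.linalg.norm(x_hat - x_true))  # reconstruction error norm
```

In the noisy, ill-posed setting analysed in the paper, one would instead stop the iteration early (the iteration count acting as the regularization parameter), rather than iterate to convergence as in this well-posed toy example.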
Recommendations
- Stochastic gradient descent for linear inverse problems in Hilbert spaces
- Stochastic asymptotical regularization for linear inverse problems
- On the Convergence of Stochastic Gradient Descent for Linear Inverse Problems in Banach Spaces
- A new regularized stochastic approximation framework for stochastic inverse problems
- A variational inequality based stochastic approximation for inverse problems in stochastic partial differential equations
- Stochastic variance reduced gradient methods using a trust-region-like scheme
- Stochastic inverse matrix computation with minimum variance of errors
- Stochastic reduced order models for inverse problems under uncertainty
- Stochastic variance-reduced cubic regularization methods
Cites Work
- Regularization tools version 4.0 for MATLAB 7.3
- A randomized Kaczmarz algorithm with exponential convergence
- Nonparametric stochastic approximation with large step-sizes
- A Stochastic Approximation Method
- Title not available
- Iterative regularization methods for nonlinear ill-posed problems
- On the Convergence of Stochastic Gradient Descent for Nonlinear Ill-Posed Problems
- Online gradient descent learning algorithms
- Online Learning as Stochastic Approximation of Regularization Paths: Optimality and Almost-Sure Convergence
- Relaxation methods for image reconstruction
- Optimization methods for large-scale machine learning
- On Nesterov acceleration for Landweber iteration of linear ill-posed problems
- Optimal rates for multi-pass stochastic gradient methods
- Inverse problems. Tikhonov theory and algorithms
- On the regularizing property of stochastic gradient descent
- Optimal-order convergence of Nesterov acceleration for linear ill-posed problems
- Online learning in optical tomography: a stochastic approach
- Stochastic EM methods with variance reduction for penalised PET reconstructions
- On the discrepancy principle for stochastic gradient descent
Cited In (12)
- On the Convergence of Stochastic Gradient Descent for Nonlinear Ill-Posed Problems
- Stochastic variance reduced gradient for affine rank minimization problem
- Improved SVRG for finite sum structure optimization with application to binary classification
- Stochastic variance reduced gradient methods using a trust-region-like scheme
- A stochastic variance reduced primal dual fixed point method for linearly constrained separable optimization
- Asymptotic estimates for \(r\)-Whitney numbers of the second kind
- Variance comparison between infinitesimal perturbation analysis and likelihood ratio estimators to stochastic gradient
- Cocoercivity, smoothness and bias in variance-reduced stochastic gradient methods
- On the Convergence of Stochastic Gradient Descent for Linear Inverse Problems in Banach Spaces
- Stochastic gradient descent for linear inverse problems in Hilbert spaces
- Analysis and improvement for a class of variance reduced methods
- A stochastic gradient descent approach with partitioned-truncated singular value decomposition for large-scale inverse problems of magnetic modulus data
Uses Software