On the Convergence of Stochastic Gradient Descent for Linear Inverse Problems in Banach Spaces
From MaRDI portal
Publication:6173540
Abstract: In this work we consider stochastic gradient descent (SGD) for solving linear inverse problems in Banach spaces. SGD and its variants are among the most successful optimisation methods in machine learning, imaging, and signal processing. At each iteration SGD uses a single datum, or a small subset of the data, resulting in highly scalable methods that are very attractive for large-scale inverse problems. Nonetheless, the theoretical analysis of SGD-based approaches for inverse problems has thus far been largely limited to Euclidean and Hilbert spaces. In this work we present a novel convergence analysis of SGD for linear inverse problems in general Banach spaces: we show the almost sure convergence of the iterates to the minimum norm solution and establish the regularising property for suitable a priori stopping criteria. Numerical results are also presented to illustrate features of the approach.
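The abstract describes an iteration that uses a single datum per step. A minimal sketch of that idea, assuming the Euclidean (p = 2) special case of the Banach-space setting discussed in the paper (the general method additionally involves duality maps, which this sketch omits); all names and the constant step-size choice are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def sgd_linear_inverse(A, y, n_iter=5000, step=None, seed=0):
    """SGD for the linear inverse problem Ax = y: each iteration samples
    one row (datum) of A, so the per-step cost is O(n) regardless of the
    number of equations. Euclidean sketch only, not the Banach-space method."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    if step is None:
        # conservative constant step based on the largest row norm
        step = 1.0 / np.max(np.sum(A ** 2, axis=1))
    x = np.zeros(n)  # zero start keeps iterates in range(A^T)
    for _ in range(n_iter):
        i = rng.integers(m)              # pick one datum uniformly at random
        residual = A[i] @ x - y[i]       # scalar residual of equation i
        x -= step * residual * A[i]      # stochastic gradient step
    return x

# Consistent toy system (invertible, so the minimum-norm solution is x_true)
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0]])
x_true = np.array([1.0, -1.0, 2.0])
y = A @ x_true
x_hat = sgd_linear_inverse(A, y, n_iter=20000)
```

For exact (noise-free) data the stochastic gradients vanish at the solution, so even a constant step size converges; for noisy data, the a priori stopping criteria analysed in the paper play the role of regularisation.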
Recommendations
- On the regularizing property of stochastic gradient descent
- Stochastic gradient descent for linear inverse problems in Hilbert spaces
- On the Convergence of Stochastic Gradient Descent for Nonlinear Ill-Posed Problems
- An analysis of stochastic variance reduced gradient for linear inverse problems
- On the convergence of stochastic primal-dual hybrid gradient
Cites work
- scientific article (zbMATH DE number 4164577; no title available)
- scientific article (zbMATH DE number 48239; no title available)
- scientific article (zbMATH DE number 1972910; no title available)
- scientific article (zbMATH DE number 936298; no title available)
- A Stochastic Approximation Method
- A fast subspace optimization method for nonlinear inverse problems in Banach spaces with an application in parameter identification
- A semismooth Newton method for \(\mathrm{L}^1\) data fitting with automatic choice of regularization parameters and noise calibration
- An Iteration Formula for Fredholm Integral Equations of the First Kind
- Conditional stability estimates for ill-posed PDE problems by using interpolation
- Convergence rates for the iteratively regularized Gauss-Newton method in Banach spaces
- Convex optimization in sums of Banach spaces
- Elastic-net regularization: error estimates and active set methods
- Extended randomized Kaczmarz method for sparse least squares and impulsive noise problems
- Fast regularizing sequential subspace optimization in Banach spaces
- Fast subspace optimization method for nonlinear inverse problems in Banach spaces with uniformly convex penalty terms
- Geometric properties of Banach spaces and nonlinear iterations
- Inexact Newton regularization combined with gradient methods in Banach spaces
- Inverse problems for partial differential equations
- Iterative regularization methods for nonlinear ill-posed problems
- Linear convergence of the randomized sparse Kaczmarz method
- Local analysis of inverse problems: Hölder stability and iterative reconstruction
- Measure theory. Vol. I and II
- Nonlinear iterative methods for linear ill-posed problems in Banach spaces
- Nonstationary iterated Tikhonov regularization for ill-posed problems in Banach spaces
- On stochastic Kaczmarz type methods for solving large scale systems of ill-posed equations
- On the Convergence of Stochastic Gradient Descent for Nonlinear Ill-Posed Problems
- On the degree of ill-posedness for nonlinear problems
- On the discrepancy principle for stochastic gradient descent
- On the regularizing property of stochastic gradient descent
- One new strategy for a priori choice of regularizing parameters in Tikhonov's regularization
- Online learning in optical tomography: a stochastic approach
- Optimization methods for large-scale machine learning
- Randomized block Kaczmarz method with projection for solving least squares
- Randomized extended Kaczmarz for solving least squares
- Reconstructing the thermal phonon transmission coefficient at solid interfaces in the phonon transport equation
- Regularization and Variable Selection Via the Elastic Net
- Regularization methods in Banach spaces.
- Regularization of inverse problems by two-point gradient methods in Banach spaces
- Stable signal recovery from incomplete and inaccurate measurements
- Stochastic EM methods with variance reduction for penalised PET reconstructions
- Stochastic gradient descent for linear inverse problems in Hilbert spaces
- Stochastic mirror descent method for linear ill-posed problems in Banach spaces
- The index function and Tikhonov regularization for ill-posed problems
- The mathematics of computerized tomography
- The power method for \(l^p\) norms
- Tikhonov regularization in Hilbert scales under conditional stability assumptions
- Variational methods in imaging
- Variational source conditions and conditional stability estimates for inverse problems in PDEs
- Variational source conditions and stability estimates for inverse electromagnetic medium scattering problems
- Variational source conditions in \(L^p\)-spaces
Cited in (9 documents)
- An analysis of stochastic variance reduced gradient for linear inverse problems
- On the Convergence of Stochastic Gradient Descent for Nonlinear Ill-Posed Problems
- On the regularizing property of stochastic gradient descent
- On the convergence rate of projected gradient descent for a back-projection based objective
- A novel two-point Landweber-type method for regularization of non-smooth inverse problems in Banach spaces
- Online learning in optical tomography: a stochastic approach
- Sublinear convergence of a tamed stochastic gradient descent method in Hilbert space
- On the regularization effect of stochastic gradient descent applied to least-squares
- Stochastic gradient descent for linear inverse problems in Hilbert spaces