On the Convergence of Stochastic Gradient Descent for Linear Inverse Problems in Banach Spaces

DOI: 10.1137/22M1518542
zbMATH Open: 1518.65053
arXiv: 2302.05197
OpenAlex: W4376288907
MaRDI QID: Q6173540


Authors: Bangti Jin, Z. Kereta


Publication date: 21 July 2023

Published in: SIAM Journal on Imaging Sciences

Abstract: In this work we consider stochastic gradient descent (SGD) for solving linear inverse problems in Banach spaces. SGD and its variants have been established as among the most successful optimisation methods in machine learning, imaging, and signal processing. At each iteration SGD uses a single datum, or a small subset of the data, resulting in highly scalable methods that are very attractive for large-scale inverse problems. Nonetheless, the theoretical analysis of SGD-based approaches for inverse problems has thus far been largely limited to Euclidean and Hilbert spaces. In this work we present a novel convergence analysis of SGD for linear inverse problems in general Banach spaces: we show the almost sure convergence of the iterates to the minimum-norm solution and establish the regularising property for suitable a priori stopping criteria. Numerical results are also presented to illustrate features of the approach.
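
The iteration described in the abstract can be sketched in the Euclidean special case. The following is a minimal, hypothetical Python illustration, not the authors' algorithm: it runs SGD on min_x (1/2)||Ax - y||^2 sampling a single row per step, with a polynomially decaying step size as an assumed a priori schedule and a row-normalised (randomised-Kaczmarz-style) scaling for numerical stability; the Banach-space machinery of the paper (duality maps) is omitted.

```python
import numpy as np

def sgd_linear_inverse(A, y, n_iter=50_000, eta0=1.0, alpha=0.5, seed=None):
    """SGD for min_x 0.5 * ||A x - y||^2, sampling one datum (row) per step.

    A plain Euclidean (Hilbert-space) sketch only: the paper's Banach-space
    analysis via duality maps is not reproduced here.  The decaying step
    size eta0 / k**alpha is an assumed a priori schedule, and the row
    normalisation is a randomised-Kaczmarz-style stabilisation.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)  # zero start biases iterates toward the minimum-norm solution
    for k in range(1, n_iter + 1):
        i = rng.integers(n)                        # single datum, drawn uniformly
        residual = A[i] @ x - y[i]                 # scalar residual of equation i
        step = eta0 / (k**alpha * (A[i] @ A[i]))   # decaying, row-normalised step
        x -= step * residual * A[i]                # stochastic gradient update
    return x

# Usage on a consistent synthetic system; the relative error should be small.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
x_true = rng.standard_normal(50)
y = A @ x_true
x_hat = sgd_linear_inverse(A, y, seed=1)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

For noisy data the same decaying schedule would be paired with an a priori stopping index rather than running to convergence, in line with the regularising property the abstract refers to.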


Full work available at URL: https://arxiv.org/abs/2302.05197






Cited in 9 documents




