On the Convergence of Stochastic Gradient Descent for Linear Inverse Problems in Banach Spaces
DOI: 10.1137/22M1518542 · zbMATH Open: 1518.65053 · arXiv: 2302.05197 · OpenAlex: W4376288907 · MaRDI QID: Q6173540 · FDO: Q6173540
Authors: Bangti Jin, Z. Kereta
Publication date: 21 July 2023
Published in: SIAM Journal on Imaging Sciences
Full work available at URL: https://arxiv.org/abs/2302.05197
Recommendations
- On the regularizing property of stochastic gradient descent
- Stochastic gradient descent for linear inverse problems in Hilbert spaces
- On the Convergence of Stochastic Gradient Descent for Nonlinear Ill-Posed Problems
- An analysis of stochastic variance reduced gradient for linear inverse problems
- On the convergence of stochastic primal-dual hybrid gradient
Keywords: almost sure convergence; convergence rate; linear inverse problems; Banach spaces; stochastic gradient descent; regularizing property
MSC classification:
- Inverse problems for PDEs (35R30)
- Numerical solutions to equations with nonlinear operators (65J15)
- Numerical solutions of ill-posed problems in abstract spaces; regularization (65J20)
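The publication recorded here analyzes stochastic gradient descent for linear inverse problems. As a minimal illustrative sketch only (in the Euclidean/Hilbert setting, not the Banach-space setting with duality maps that the paper actually studies), the basic iteration samples one row of a consistent linear system per step; the problem data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic consistent linear system A x = b (illustrative only; the paper
# treats the Banach-space case, which additionally involves duality maps).
n, p = 200, 50
A = rng.standard_normal((n, p))
x_true = rng.standard_normal(p)
b = A @ x_true

x = np.zeros(p)
# Constant step size bounded by the largest squared row norm, so each
# update is a (relaxed) Kaczmarz-type projection step.
eta = 1.0 / np.max(np.sum(A * A, axis=1))
for k in range(20000):
    i = rng.integers(n)                 # sample one row index uniformly
    residual = A[i] @ x - b[i]
    x -= eta * residual * A[i]          # SGD step on (1/2)(a_i^T x - b_i)^2

print(np.linalg.norm(x - x_true))       # reconstruction error
```

In the noise-free, consistent regime shown here, this iteration converges linearly in expectation; with noisy data, early stopping supplies the regularization, which is the regime the paper's convergence and regularization analysis addresses.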
Cites Work
- Measure theory. Vol. I and II
- Regularization and Variable Selection Via the Elastic Net
- A Stochastic Approximation Method
- Stable signal recovery from incomplete and inaccurate measurements
- Geometric properties of Banach spaces and nonlinear iterations
- The mathematics of computerized tomography
- Regularization methods in Banach spaces.
- Iterative regularization methods for nonlinear ill-posed problems
- Variational methods in imaging
- Nonstationary iterated Tikhonov regularization for ill-posed problems in Banach spaces
- Nonlinear iterative methods for linear ill-posed problems in Banach spaces
- Convergence rates for the iteratively regularized Gauss-Newton method in Banach spaces
- An Iteration Formula for Fredholm Integral Equations of the First Kind
- On the Convergence of Stochastic Gradient Descent for Nonlinear Ill-Posed Problems
- The power method for \(l^p\) norms
- A semismooth Newton method for \(\mathrm{L}^1\) data fitting with automatic choice of regularization parameters and noise calibration
- Conditional stability estimates for ill-posed PDE problems by using interpolation
- Randomized extended Kaczmarz for solving least squares
- One new strategy for a priori choice of regularizing parameters in Tikhonov's regularization
- Local analysis of inverse problems: Hölder stability and iterative reconstruction
- Randomized block Kaczmarz method with projection for solving least squares
- Fast regularizing sequential subspace optimization in Banach spaces
- Variational source conditions and stability estimates for inverse electromagnetic medium scattering problems
- Elastic-net regularization: error estimates and active set methods
- On the degree of ill-posedness for nonlinear problems
- Inexact Newton regularization combined with gradient methods in Banach spaces
- Stochastic mirror descent method for linear ill-posed problems in Banach spaces
- Inverse problems for partial differential equations
- Optimization methods for large-scale machine learning
- Stochastic gradient descent for linear inverse problems in Hilbert spaces
- Convex optimization in sums of Banach spaces
- Variational source conditions in \(L^p\)-spaces
- Linear convergence of the randomized sparse Kaczmarz method
- Variational source conditions and conditional stability estimates for inverse problems in PDEs
- Tikhonov regularization in Hilbert scales under conditional stability assumptions
- A fast subspace optimization method for nonlinear inverse problems in Banach spaces with an application in parameter identification
- On the regularizing property of stochastic gradient descent
- Fast subspace optimization method for nonlinear inverse problems in Banach spaces with uniformly convex penalty terms
- Regularization of inverse problems by two-point gradient methods in Banach spaces
- Extended randomized Kaczmarz method for sparse least squares and impulsive noise problems
- Online learning in optical tomography: a stochastic approach
- The index function and Tikhonov regularization for ill-posed problems
- On stochastic Kaczmarz type methods for solving large scale systems of ill-posed equations
- Stochastic EM methods with variance reduction for penalised PET reconstructions
- On the discrepancy principle for stochastic gradient descent
- Reconstructing the thermal phonon transmission coefficient at solid interfaces in the phonon transport equation
Cited In (9)
- An analysis of stochastic variance reduced gradient for linear inverse problems
- On the Convergence of Stochastic Gradient Descent for Nonlinear Ill-Posed Problems
- On the regularizing property of stochastic gradient descent
- On the convergence rate of projected gradient descent for a back-projection based objective
- A novel two-point Landweber-type method for regularization of non-smooth inverse problems in Banach spaces
- Online learning in optical tomography: a stochastic approach
- Sublinear convergence of a tamed stochastic gradient descent method in Hilbert space
- On the regularization effect of stochastic gradient descent applied to least-squares
- Stochastic gradient descent for linear inverse problems in Hilbert spaces