Stochastic mirror descent method for linear ill-posed problems in Banach spaces
From MaRDI portal
Publication: 6101039
DOI: 10.1088/1361-6420/accd8e
arXiv: 2207.06584
OpenAlex: W4366138915
MaRDI QID: Q6101039
Unnamed Author, Qi-nian Jin, Xiliang Lu
Publication date: 31 May 2023
Published in: Inverse Problems
Full work available at URL: https://arxiv.org/abs/2207.06584
Related Items
- Stochastic linear regularization methods: random discrepancy principle and applications
- Dual gradient method for ill-posed problems using multiple repeated measurement data
- On the Convergence of Stochastic Gradient Descent for Linear Inverse Problems in Banach Spaces
Cites Work
- Primal-dual subgradient methods for convex problems
- Regularization methods in Banach spaces
- On the complexity analysis of randomized block-coordinate descent methods
- Regularization of ill-posed linear equations by the non-stationary augmented Lagrangian method
- Iterative regularization methods for nonlinear ill-posed problems
- Linear convergence of the randomized sparse Kaczmarz method
- Mirror descent and nonlinear projected subgradient methods for convex optimization
- Convergence rates of a dual gradient method for constrained linear ill-posed problems
- AIR tools -- a MATLAB package of algebraic iterative reconstruction methods
- Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
- Kaczmarz methods for regularizing nonlinear ill-posed equations. I: Convergence analysis
- The Mathematics of Computerized Tomography
- Landweber-Kaczmarz method in Banach spaces with inexact inner solvers
- Landweber iteration of Kaczmarz type with general non-smooth convex penalty functionals
- Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems
- Convergence Rates for Maximum Entropy Regularization
- Iterative regularization with a general penalty term—theory and application to L1 and TV regularization
- Robust Stochastic Approximation Approach to Stochastic Programming
- Convergence of Best Entropy Estimates
- Maximum Entropy Regularization for Fredholm Integral Equations of the First Kind
- Preasymptotic convergence of randomized Kaczmarz method
- Optimization Methods for Large-Scale Machine Learning
- On the regularizing property of stochastic gradient descent
- Convergence rates of convex variational regularization
- On Stochastic Subgradient Mirror-Descent Algorithm with Weighted Averaging
- On stochastic Kaczmarz type methods for solving large scale systems of ill-posed equations
- Stochastic gradient descent for linear inverse problems in Hilbert spaces
- On the Convergence of Stochastic Gradient Descent for Nonlinear Ill-Posed Problems
- On the discrepancy principle for stochastic gradient descent
- An entropic Landweber method for linear ill-posed problems
- A fast nonstationary iterative method with convex penalty for inverse problems in Hilbert spaces
- On the Convergence of Mirror Descent beyond Stochastic Convex Programming
- A basic course in probability theory
- Faster randomized block sparse Kaczmarz by averaging