Convergence rates of a dual gradient method for constrained linear ill-posed problems
DOI: 10.1007/s00211-022-01300-4 · zbMATH Open: 1498.65077 · arXiv: 2206.07379 · OpenAlex: W4283026013 · MaRDI QID: Q2159243 · FDO: Q2159243
Publication date: 28 July 2022
Published in: Numerische Mathematik
Abstract: In this paper we consider a dual gradient method for solving linear ill-posed problems \(Ax = y\), where \(A \colon X \to Y\) is a bounded linear operator from a Banach space \(X\) to a Hilbert space \(Y\). A strongly convex penalty function is used in the method to select a solution with the desired feature. Under variational source conditions on the sought solution, convergence rates are derived when the method is terminated by either an a priori stopping rule or the discrepancy principle. We also consider an acceleration of the method as well as its various applications.
Full work available at URL: https://arxiv.org/abs/2206.07379
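To make the abstract's description concrete, the following is a minimal numerical sketch of a dual gradient iteration of this general type: a gradient step on the dual of \(\min R(x)\) subject to \(Ax = y\), applied to noisy data and stopped by the discrepancy principle. It is not the paper's exact algorithm or setting; the finite-dimensional matrix, the elastic-net-type strongly convex penalty \(R(x) = \beta\|x\|_1 + \tfrac12\|x\|_2^2\), and all parameter values are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's algorithm): dual gradient method
# for min R(x) s.t. Ax = y with a strongly convex penalty, using noisy data y_delta
# with noise level delta and the discrepancy principle as stopping rule.
import numpy as np

def soft_threshold(z, beta):
    # Componentwise soft-thresholding: the primal minimizer of
    # beta*||x||_1 + 0.5*||x||_2^2 - <z, x>.
    return np.sign(z) * np.maximum(np.abs(z) - beta, 0.0)

def dual_gradient(A, y_delta, delta, beta=0.05, tau=1.1, max_iter=5000):
    # Gradient ascent on the dual variable lam; the primal iterate x_n is
    # recovered from lam via soft-thresholding of A^T lam.
    m, _ = A.shape
    lam = np.zeros(m)                          # dual variable
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # step size <= 1 / ||A||^2
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        x = soft_threshold(A.T @ lam, beta)    # primal iterate from the dual variable
        residual = A @ x - y_delta
        if np.linalg.norm(residual) <= tau * delta:
            break                              # discrepancy principle satisfied
        lam = lam - step * residual            # dual gradient step
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((80, 200))
    x_true = np.zeros(200)
    x_true[[5, 40, 120]] = [1.0, -2.0, 1.5]    # sparse exact solution
    y = A @ x_true
    delta = 0.01 * np.linalg.norm(y)           # noise level
    noise = rng.standard_normal(80)
    y_delta = y + delta * noise / np.linalg.norm(noise)
    x_rec = dual_gradient(A, y_delta, delta)
    print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```

The sparsity-promoting penalty here only illustrates how a strongly convex penalty selects a solution with a desired feature; other strongly convex choices (e.g. a quadratic or an entropy-type functional) fit the same iteration.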
Numerical solutions to equations with nonlinear operators (65J15) · Numerical solutions of ill-posed problems in abstract spaces; regularization (65J20)
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- First-Order Methods in Optimization
- Applications of a Splitting Algorithm to Decomposition in Convex Programming and Variational Inequalities
- Techniques of variational analysis
- The mathematics of computerized tomography
- Morozov's principle for the augmented Lagrangian method applied to linear inverse problems
- Parameter choice in Banach space regularization under variational inequalities
- Regularization methods in Banach spaces.
- A convergence analysis of the Landweber iteration for nonlinear ill-posed problems
- Convexity and Optimization in Banach Spaces
- Nonstationary iterated Tikhonov regularization for ill-posed problems in Banach spaces
- Nonlinear iterative methods for linear ill-posed problems in Banach spaces
- Iteration methods for convexly constrained ill-posed problems in Hilbert space
- A convergence rates result for Tikhonov regularization in Banach spaces with non-smooth operators
- Regularization of linear ill-posed problems by the augmented Lagrangian method and variational inequalities
- Tikhonov-regularization of ill-posed linear operator equations on closed convex sets
- Landweber iteration of Kaczmarz type with general non-smooth convex penalty functionals
- Iterative methods for nonlinear ill-posed problems in Banach spaces: convergence and applications to parameter identification problems
- Verification of a variational source condition for acoustic inverse medium scattering problems
- Convergence of Best Entropy Estimates
- Maximum entropy regularization of Fredholm integral equations of the first kind
- Accelerated Landweber iterations for the solution of ill-posed equations
- Non-convex sparse regularisation
- Regularization of ill-posed linear equations by the non-stationary augmented Lagrangian method
- Convergence Rates for Maximum Entropy Regularization
- Maximum entropy method for solving nonlinear ill-posed problems
- The rate of convergence of Nesterov's accelerated forward-backward method is actually faster than \(1/k^2\)
- Injectivity and \(\text{weak}^\star\)-to-weak continuity suffice for convergence rates in \(\ell^{1}\)-regularization
- On Nesterov acceleration for Landweber iteration of linear ill-posed problems
- Maximum Entropy Regularization for Fredholm Integral Equations of the First Kind
- Existence of variational source conditions for nonlinear inverse problems in Banach spaces
- Landweber-Kaczmarz method in Banach spaces with inexact inner solvers
- Characterizations of Variational Source Conditions, Converse Results, and Maxisets of Spectral Regularization Methods
- Convergence analysis of a two-point gradient method for nonlinear ill-posed problems
- Iterative regularization with a general penalty term—theory and application to \(L^1\) and TV regularization
- Stability of Over-Relaxations for the Forward-Backward Algorithm, Application to FISTA
- Nesterov’s accelerated gradient method for nonlinear ill-posed problems with a locally convex residual functional
- Regularization of inverse problems by two-point gradient methods in Banach spaces
- On a heuristic stopping rule for the regularization of inverse problems by the augmented Lagrangian method
- A revisit on Landweber iteration
- Optimal-order convergence of Nesterov acceleration for linear ill-posed problems*
- An entropic Landweber method for linear ill-posed problems
Cited In (7)
- Dual gradient method for ill-posed problems using multiple repeated measurement data
- Stochastic mirror descent method for linear ill-posed problems in Banach spaces
- Improved local convergence analysis of the Landweber iteration in Banach spaces
- On convergence rates of proximal alternating direction method of multipliers
- Rate of convergence analysis of dual-based variables decomposition methods for strongly convex problems
- On the Convergence Rate of Dual Ascent Methods for Linearly Constrained Convex Minimization
- A revisit on Nesterov acceleration for linear ill-posed problems