Convergence rates of a dual gradient method for constrained linear ill-posed problems
From MaRDI portal
Publication:2159243
Abstract: In this paper we consider a dual gradient method for solving linear ill-posed problems \(Ax = y\), where \(A\) is a bounded linear operator from a Banach space \(X\) to a Hilbert space \(Y\). A strongly convex penalty function is used in the method to select a solution with the desired feature. Under variational source conditions on the sought solution, convergence rates are derived when the method is terminated by either an \textit{a priori} stopping rule or the discrepancy principle. We also consider an acceleration of the method as well as its various applications.
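The method described in the abstract can be illustrated with a minimal finite-dimensional sketch. It iterates on a dual variable \(\xi_{n+1} = \xi_n - \mu A^*(Ax_n - y^\delta)\) with the primal update \(x_n = \nabla R^*(\xi_n)\), where \(R\) is the strongly convex penalty, and stops via the discrepancy principle. The concrete penalty \(R(x) = \tfrac12\|x\|^2 + \beta\|x\|_1\) (whose conjugate gradient is soft-thresholding), the step size, and all parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def dual_gradient(A, y_delta, delta, beta=0.1, mu=None, tau=1.1, max_iter=10000):
    """Dual gradient sketch for min R(x) s.t. Ax = y with
    R(x) = 0.5*||x||^2 + beta*||x||_1 (an illustrative choice).
    Iterates xi <- xi - mu * A^T (A x - y_delta), x = grad R^*(xi),
    and stops by the discrepancy principle ||A x - y_delta|| <= tau*delta."""
    if mu is None:
        # grad R^* is 1-Lipschitz, so the dual gradient is ||A||^2-Lipschitz
        mu = 1.0 / np.linalg.norm(A, 2) ** 2
    xi = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        # grad R^*(xi) is soft-thresholding for this penalty
        x = np.sign(xi) * np.maximum(np.abs(xi) - beta, 0.0)
        r = A @ x - y_delta
        if np.linalg.norm(r) <= tau * delta:   # discrepancy principle
            break
        xi -= mu * (A.T @ r)
    return x
```

With \(\beta = 0\) the iteration reduces to classical Landweber iteration; the \(\ell^1\) term biases the selected solution toward sparsity, which is the kind of "desired feature" a strongly convex penalty can encode.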
Recommendations
- Convergence of the gradient method for ill-posed problems
- A new gradient method for ill-posed problems
- Ill-posed problems and the conjugate gradient method: optimal convergence rates in the presence of discretization and modelling errors
- Convergence rate estimation of gradient methods via conditional stability of inverse and ill-posed problems
- scientific article; zbMATH DE number 4046975
Cites work
- scientific article; zbMATH DE number 1807400
- scientific article; zbMATH DE number 3850830
- scientific article; zbMATH DE number 936298
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A convergence analysis of the Landweber iteration for nonlinear ill-posed problems
- A convergence rates result for Tikhonov regularization in Banach spaces with non-smooth operators
- A revisit on Landweber iteration
- Accelerated Landweber iterations for the solution of ill-posed equations
- An entropic Landweber method for linear ill-posed problems
- Applications of a Splitting Algorithm to Decomposition in Convex Programming and Variational Inequalities
- Characterizations of variational source conditions, converse results, and maxisets of spectral regularization methods
- Convergence Rates for Maximum Entropy Regularization
- Convergence analysis of a two-point gradient method for nonlinear ill-posed problems
- Convergence of Best Entropy Estimates
- Convexity and optimization in Banach spaces.
- Existence of variational source conditions for nonlinear inverse problems in Banach spaces
- First-order methods in optimization
- Injectivity and \(\text{weak}^\star\)-to-weak continuity suffice for convergence rates in \(\ell^{1}\)-regularization
- Iteration methods for convexly constrained ill-posed problems in Hilbert space
- Iterative methods for nonlinear ill-posed problems in Banach spaces: convergence and applications to parameter identification problems
- Iterative regularization with a general penalty term: theory and application to \(L^{1}\) and \(TV\) regularization
- Landweber iteration of Kaczmarz type with general non-smooth convex penalty functionals
- Landweber-Kaczmarz method in Banach spaces with inexact inner solvers
- Maximum Entropy Regularization for Fredholm Integral Equations of the First Kind
- Maximum entropy method for solving nonlinear ill-posed problems
- Maximum entropy regularization of Fredholm integral equations of the first kind
- Morozov's principle for the augmented Lagrangian method applied to linear inverse problems
- Nesterov’s accelerated gradient method for nonlinear ill-posed problems with a locally convex residual functional
- Non-convex sparse regularisation
- Nonlinear iterative methods for linear ill-posed problems in Banach spaces
- Nonstationary iterated Tikhonov regularization for ill-posed problems in Banach spaces
- On Nesterov acceleration for Landweber iteration of linear ill-posed problems
- On a heuristic stopping rule for the regularization of inverse problems by the augmented Lagrangian method
- Optimal-order convergence of Nesterov acceleration for linear ill-posed problems
- Parameter choice in Banach space regularization under variational inequalities
- Regularization methods in Banach spaces.
- Regularization of ill-posed linear equations by the non-stationary augmented Lagrangian method
- Regularization of inverse problems by two-point gradient methods in Banach spaces
- Regularization of linear ill-posed problems by the augmented Lagrangian method and variational inequalities
- Stability of over-relaxations for the forward-backward algorithm, application to FISTA
- Techniques of variational analysis
- The mathematics of computerized tomography
- The rate of convergence of Nesterov's accelerated forward-backward method is actually faster than \(1/k^2\)
- Tikhonov-regularization of ill-posed linear operator equations on closed convex sets
- Verification of a variational source condition for acoustic inverse medium scattering problems
Cited in (10)
- Dual gradient method for ill-posed problems using multiple repeated measurement data
- Stochastic mirror descent method for linear ill-posed problems in Banach spaces
- A new accelerated algorithm for ill-conditioned ridge regression problems
- Improved local convergence analysis of the Landweber iteration in Banach spaces
- On convergence rates of proximal alternating direction method of multipliers
- Regularization of ill-posed linear equations by the non-stationary augmented Lagrangian method
- Rate of convergence analysis of dual-based variables decomposition methods for strongly convex problems
- Title not available
- On the Convergence Rate of Dual Ascent Methods for Linearly Constrained Convex Minimization
- A revisit on Nesterov acceleration for linear ill-posed problems