Scaling techniques for \(\epsilon\)-subgradient methods
From MaRDI portal
Publication:2817840
Abstract: The recent literature on first order methods for smooth optimization shows that significant improvements in the practical convergence behaviour can be achieved with variable stepsize and scaling for the gradient, making this class of algorithms attractive for a variety of relevant applications. In this paper we introduce a variable metric in the context of \(\epsilon\)-subgradient projection methods for nonsmooth, constrained, convex problems, in combination with two different stepsize selection strategies. We develop the theoretical convergence analysis of the proposed approach and we also discuss practical implementation issues, such as the choice of the scaling matrix. In order to illustrate the effectiveness of the method, we consider a specific problem in the image restoration framework and we numerically evaluate the effects of a variable scaling and of the steplength selection strategy on the convergence behaviour.
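The abstract describes a scaled (variable-metric) projected subgradient iteration. As a minimal illustration of the general idea — not the paper's exact algorithm — the sketch below runs an iteration of the form \(x_{k+1} = P_C(x_k - \alpha_k D_k g_k)\) on a toy problem, where \(g_k\) is a subgradient, \(D_k\) a diagonal scaling, and \(\alpha_k\) a diminishing stepsize. The problem (minimizing \(\|x-b\|_1\) over a box) and the particular scaling rule are hypothetical choices made for this example.

```python
import numpy as np

def scaled_subgradient(b, lo=0.0, hi=1.0, alpha0=0.5, iters=500):
    """Sketch of a scaled projected subgradient method on
    minimize f(x) = ||x - b||_1  subject to  x in the box [lo, hi]^n.
    The diagonal scaling below is illustrative, not the paper's rule."""
    x = np.zeros(b.size)
    for k in range(iters):
        g = np.sign(x - b)                      # a subgradient of ||x - b||_1
        d = 1.0 / (1.0 + np.abs(x - b))         # hypothetical diagonal metric D_k
        alpha = alpha0 / np.sqrt(k + 1)         # diminishing stepsize alpha_k
        x = np.clip(x - alpha * d * g, lo, hi)  # projection onto the box
    return x

b = np.array([0.3, 1.7, -0.4])
x_star = scaled_subgradient(b)
# x_star approaches the projection of b onto [0, 1]^3, i.e. roughly [0.3, 1.0, 0.0]
```

With a diminishing stepsize the iterates oscillate around the solution with shrinking amplitude, which is the classical convergence mechanism for (ε-)subgradient schemes; the variable metric only rescales the search direction coordinate-wise.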
Recommendations
- Variable metric techniques for forward-backward methods in imaging
- \(\epsilon\)-subgradient algorithms for bilevel convex optimization
- Incremental subgradients for constrained convex optimization: A unified framework and new methods
- Variable metric inexact line-search-based methods for nonsmooth optimization
- New convergence results for the scaled gradient projection method
Cites work
- scientific article; zbMATH DE number 439380 (no title available)
- scientific article; zbMATH DE number 3894826 (no title available)
- A convergent blind deconvolution method for post-adaptive-optics astronomical imaging
- A descent proximal level bundle method for convex nondifferentiable optimization
- A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science
- A general method to devise maximum-likelihood signal restoration multiplicative algorithms with non-negativity constraints.
- A limited memory steepest descent method
- A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms
- A scaled gradient projection method for Bayesian learning in dynamical systems
- A scaled gradient projection method for constrained image deblurring
- Accelerated and inexact forward-backward algorithms
- An \(\mathcal O(1/{k})\) convergence rate for the variable stepsize Bregman operator splitting algorithm
- An affine-scaling interior-point CBB method for box-constrained optimization
- An alternating extragradient method for total variation-based image restoration from Poisson data
- An inertial forward-backward algorithm for monotone inclusions
- Bregman operator splitting with variable stepsize for total variation image reconstruction
- Convergence analysis of deflected conditional approximate subgradient methods
- Convergence of Approximate and Incremental Subgradient Methods for Convex Optimization
- Convergence of a simple subgradient level method
- Convergence of some algorithms for convex minimization
- Convex Analysis
- Convex analysis and monotone operator theory in Hilbert spaces
- Convex optimization theory.
- Covariance-Preconditioned Iterative Methods for Nonnegatively Constrained Astronomical Imaging
- Efficient gradient projection methods for edge-preserving removal of Poisson noise
- Error stability properties of generalized gradient-type algorithms
- Image deblurring with Poisson data: from cells to galaxies
- Incremental subgradient methods for nondifferentiable optimization
- Introductory lectures on convex optimization. A basic course.
- Linear convergence of epsilon-subgradient descent methods for a class of convex functions
- Linear convergence of iterative soft-thresholding
- New adaptive stepsize selections in gradient methods
- Non-negatively constrained image deblurring with an inexact interior point method
- Nonlinear total variation based noise removal algorithms
- Nonnegative image reconstruction from sparse Fourier data: a new deconvolution algorithm
- On convergence rates of subgradient optimization methods
- On proximal subgradient splitting method for minimizing the sum of two nonsmooth convex functions
- On spectral properties of steepest descent methods
- On the convergence of conditional \(\epsilon\)-subgradient methods for convex programs and convex-concave saddle-point problems.
- On the convergence of primal-dual hybrid gradient algorithm
- On the convergence of primal-dual hybrid gradient algorithms for total variation image restoration
- On the convergence of the forward-backward splitting method with linesearches
- On the projected subgradient method for nonsmooth convex optimization in a Hilbert space
- Online learning and online convex optimization
- Penalized maximum likelihood image restoration with positivity constraints: multiplicative algorithms
- Projected subgradient methods with non-Euclidean distances for non-differentiable convex minimization and variational inequalities
- Quasi-Fejérian analysis of some optimization algorithms
- Restoration of Poissonian Images Using Alternating Direction Optimization
- Scaling techniques for gradient projection-type methods in astronomical image deblurring
- The Efficiency of Subgradient Projection Methods for Convex Optimization, Part I: General Level Methods
- The Efficiency of Subgradient Projection Methods for Convex Optimization, Part II: Implementations and Extensions
- The cyclic Barzilai–Borwein method for unconstrained optimization
- Total variation-penalized Poisson likelihood estimation for ill-posed problems
- Two-Point Step Size Gradient Methods
- Variable metric forward-backward algorithm for minimizing the sum of a differentiable function and a convex function
- Variable metric forward-backward splitting with applications to monotone inclusions in duality
- Variable metric inexact line-search-based methods for nonsmooth optimization
- Variable metric quasi-Fejér monotonicity
- Stochastic quasigradient methods and their application to system optimization
Cited in (7)
- A Subspace Decomposition Principle for Scaled Gradient Projection Methods: Local Theory
- \(\epsilon\)-subgradient algorithms for bilevel convex optimization
- A two-metric variable scaled forward-backward algorithm for \(\ell_0\) optimization problem and its applications
- Stochastic primal-dual hybrid gradient algorithm with adaptive step sizes
- Variable metric techniques for forward-backward methods in imaging
- Efficient position estimation of 3D fluorescent spherical beads in confocal microscopy via Poisson denoising
- Modified Fejér sequences and applications
This page was built for publication: Scaling techniques for \(\epsilon\)-subgradient methods