Inexact proximal \(\epsilon\)-subgradient methods for composite convex optimization problems
From MaRDI portal
Publication:2010107
Abstract: We present two approximate versions of the proximal subgradient method for minimizing the sum of two convex functions (not necessarily differentiable). The algorithms involve, at each iteration, inexact evaluations of the proximal operator and approximate subgradients of the functions (namely, the \(\epsilon\)-subgradients). The methods use different error criteria for approximating the proximal operators. We provide an analysis of the convergence and rate-of-convergence properties of these methods, considering various stepsize rules, including both diminishing and constant stepsizes. For the case where one of the functions is smooth, we propose an inexact accelerated version of the proximal gradient method, and prove that the optimal convergence rate for the function values can be achieved.
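The iteration described in the abstract can be sketched roughly as follows. This is a minimal illustrative example, not the paper's algorithm: it minimizes \(f(x) + g(x)\) with \(f(x) = \|Ax - b\|_1\) (nonsmooth) and \(g(x) = \lambda\|x\|_1\), whose exact proximal operator is soft-thresholding; the inexact \(\epsilon\)-subgradient is mimicked by perturbing an exact subgradient, and a diminishing stepsize \(\alpha_k \propto 1/\sqrt{k}\) is used. All function names and the specific perturbation are assumptions for illustration.

```python
import numpy as np

def soft_threshold(v, t):
    """Exact proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def eps_subgradient(A, b, x, eps=1e-3):
    """An (approximate) subgradient of f(x) = ||Ax - b||_1.

    A^T sign(Ax - b) is an exact subgradient; the small additive
    perturbation stands in for an epsilon-subgradient evaluation.
    """
    u = A.T @ np.sign(A @ x - b)
    rng = np.random.default_rng(0)  # fixed seed: deterministic "noise"
    return u + eps * rng.standard_normal(u.shape)

def inexact_proximal_subgradient(A, b, lam, steps=200):
    """Sketch of a proximal subgradient iteration with inexact pieces:
    x_{k+1} = prox_{alpha_k * g}(x_k - alpha_k * u_k),  u_k ~ subgrad f(x_k).
    """
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2  # spectral-norm-based scaling
    for k in range(1, steps + 1):
        alpha = 1.0 / (L * np.sqrt(k))          # diminishing stepsize rule
        u = eps_subgradient(A, b, x)            # inexact subgradient step
        x = soft_threshold(x - alpha * u, alpha * lam)  # prox step
    return x
```

With diminishing stepsizes the iterates drift toward a minimizer despite the subgradient errors, which is the kind of behavior the paper's convergence analysis makes precise under its error criteria.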
Recommendations
- Inexact and accelerated proximal point algorithms
- Accelerated and inexact forward-backward algorithms
- A note on the (accelerated) proximal gradient method for composite convex optimization
- Subgradient method with feasible inexact projections for constrained convex optimization problems
- On convergence analysis of dual proximal-gradient methods with approximate gradient for a class of nonsmooth convex minimization problems
Cites work
- scientific article; zbMATH DE number 3850830
- scientific article; zbMATH DE number 4164577
- scientific article; zbMATH DE number 477581
- scientific article; zbMATH DE number 3894826
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A unified framework for some inexact proximal point algorithms
- A direct splitting method for nonsmooth variational inequalities
- A hybrid approximate extragradient-proximal point algorithm using the enlargement of a maximal monotone operator
- A relaxed-projection splitting algorithm for variational inequalities in Hilbert spaces
- A simplified view of first order methods for optimization
- Accelerated and inexact forward-backward algorithms
- An accelerated hybrid proximal extragradient method for convex optimization and its implications to second-order methods
- An additive subfamily of enlargements of a maximally monotone operator
- An algorithm for total variation minimization and applications
- An inertial forward-backward-forward primal-dual splitting algorithm for solving monotone inclusion problems
- Convex optimization theory.
- Ergodic convergence to a zero of the sum of monotone operators in Hilbert space
- Fast Gradient-Based Algorithms for Constrained Total Variation Image Denoising and Deblurring Problems
- Global convergence of splitting methods for nonconvex composite optimization
- Gradient methods for minimizing composite functions
- Incremental proximal methods for large scale convex optimization
- Incremental subgradient methods for nondifferentiable optimization
- Inexact spectral projected gradient methods on convex sets
- Introductory lectures on convex optimization. A basic course.
- Local linear convergence analysis of primal-dual splitting methods
- Monotone (nonlinear) operators in Hilbert space
- Monotone Operators and the Proximal Point Algorithm
- On generalized \(\epsilon\)-subdifferential and radial epiderivative of set-valued mappings
- On proximal subgradient splitting method for minimizing the sum of two nonsmooth convex functions
- On the Subdifferentiability of Convex Functions
- On the projected subgradient method for nonsmooth convex optimization in a Hilbert space
- Primal recovery from consensus-based dual decomposition for distributed convex optimization
- Proximité et dualité dans un espace hilbertien
- Quasi-Fejérian analysis of some optimization algorithms
- Splitting Algorithms for the Sum of Two Nonlinear Operators
- Two algorithms for solving systems of inclusion problems
- \(\epsilon\)-subgradient algorithms for bilevel convex optimization
Cited in (20)
- Composite convex optimization with global and local inexact oracles
- On the equivalence of inexact proximal ALM and ADMM for a class of convex composite programming
- The Inexact Cyclic Block Proximal Gradient Method and Properties of Inexact Proximal Maps
- New inertial proximal gradient methods for unconstrained convex optimization problems
- Computing proximal points of convex functions with inexact subgradients
- Gradient methods for problems with inexact model of the objective
- Optimization and learning with nonlocal calculus
- Inexact proximal stochastic second-order methods for nonconvex composite optimization
- Exact worst-case convergence rates of the proximal gradient method for composite convex minimization
- Inexact proximal stochastic gradient method for convex composite optimization
- A note on approximate accelerated forward-backward methods with absolute and relative errors, and possibly strongly convex objectives
- Accelerated and inexact forward-backward algorithms
- A note on the (accelerated) proximal gradient method for composite convex optimization
- The proximal methods for solving absolute value equation
- An inexact proximal augmented Lagrangian framework with arbitrary linearly convergent inner solver for composite convex optimization
- Inexact and accelerated proximal point algorithms
- Consistent approximations in composite optimization
- Primal-dual \(\varepsilon\)-subgradient method for distributed optimization
- On FISTA with a relative error rule
- Principled analyses and design of first-order methods with inexact proximal operators