On the convergence of the iterates of proximal gradient algorithm with extrapolation for convex nonsmooth minimization problems
Publication: 2010091
DOI: 10.1007/s10898-019-00789-8
zbMath: 1432.90118
OpenAlex: W2947334029
Wikidata: Q127859474
Scholia: Q127859474
MaRDI QID: Q2010091
Publication date: 3 December 2019
Published in: Journal of Global Optimization
Full work available at URL: https://doi.org/10.1007/s10898-019-00789-8
Related Items
- Accelerated smoothing hard thresholding algorithms for \(\ell_0\) regularized nonsmooth convex regression problem
- An extrapolated proximal iteratively reweighted method for nonconvex composite optimization problems
Uses Software
Cites Work
- Smooth minimization of non-smooth functions
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- An inertial forward-backward algorithm for the minimization of the sum of two nonconvex functions
- On the convergence of the iterates of the ``fast iterative shrinkage/thresholding algorithm''
- Approximation accuracy, gradient methods, and error bound for structured convex optimization
- Convex analysis and nonlinear optimization. Theory and examples.
- Dual extrapolation and its applications to solving variational inequalities and related problems
- On the convergence of the proximal algorithm for nonsmooth functions involving analytic features
- A bundle Bregman proximal method for convex nondifferentiable minimization
- On semi- and subanalytic geometry
- Introductory lectures on convex optimization. A basic course.
- Templates for convex cone problems with applications to sparse signal recovery
- Calculus of the exponent of Kurdyka-Łojasiewicz inequality and its applications to linear convergence of first-order methods
- Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods
- Quadratic optimization with orthogonality constraint: explicit Łojasiewicz exponent and linear convergence of retraction-based line-search and stochastic variance-reduced gradient methods
- Adaptive restart for accelerated gradient schemes
- Combining fast inertial dynamics for convex optimization with Tikhonov regularization
- Exact matrix completion via convex optimization
- A Differential Equation for Modeling Nesterov's Accelerated Gradient Method: Theory and Insights
- Proximal Alternating Minimization and Projection Methods for Nonconvex Problems: An Approach Based on the Kurdyka-Łojasiewicz Inequality
- Decoding by Linear Programming
- Splitting Algorithms for the Sum of Two Nonlinear Operators
- A Version of the Bundle Idea for Minimizing a Nonsmooth Function: Conceptual Idea, Convergence Analysis, Numerical Results
- Variational Analysis
- Survey of Bundle Methods for Nonsmooth Optimization
- The Łojasiewicz Inequality for Nonsmooth Subanalytic Functions with Applications to Subgradient Dynamical Systems
- Compressed sensing