An adaptive accelerated proximal gradient method and its homotopy continuation for sparse optimization
DOI: 10.1007/s10589-014-9694-4
zbMath: 1341.90102
OpenAlex: W1986046400
MaRDI QID: Q2352420
Publication date: 1 July 2015
Published in: Computational Optimization and Applications
Full work available at URL: https://doi.org/10.1007/s10589-014-9694-4
Keywords: homotopy continuation, first-order method, sparse optimization, proximal gradient method, L1-regularized least-squares
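For orientation, the sketch below illustrates a plain FISTA-style accelerated proximal gradient iteration for the L1-regularized least-squares problem named in the keywords. It is only an illustrative baseline, not the adaptive/homotopy scheme of the paper itself: the paper additionally adapts the step-size parameters and drives the regularization weight along a continuation path. All variable names and the fixed step size 1/L are assumptions for the example.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def accelerated_proximal_gradient(A, b, lam, num_iters=500):
    """FISTA-style sketch for min_x 0.5*||A x - b||^2 + lam*||x||_1.

    Illustrative only: the paper's method also adapts the step size and
    uses homotopy continuation in lam, which is omitted here.
    """
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(num_iters):
        grad = A.T @ (A @ y - b)              # gradient of 0.5*||A y - b||^2
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # Nesterov extrapolation
        x, t = x_new, t_new
    return x
```

Usage would follow the standard pattern, e.g. `x_hat = accelerated_proximal_gradient(A, b, lam=0.1)` for a given sensing matrix `A` and measurements `b` (both hypothetical here).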
Related Items
Reducing the Complexity of Two Classes of Optimization Problems by Inexact Accelerated Proximal Gradient Method ⋮ A secant-based Nesterov method for convex functions ⋮ Complexity of an inexact proximal-point penalty method for constrained smooth non-convex optimization ⋮ Continuation Methods for Riemannian Optimization ⋮ Accelerated Stochastic Algorithms for Convex-Concave Saddle-Point Problems ⋮ Potential Function-Based Framework for Minimizing Gradients in Convex and Min-Max Optimization ⋮ First-Order Methods for Problems with $O$(1) Functional Constraints Can Have Almost the Same Convergence Rate as for Unconstrained Problems ⋮ A speed restart scheme for a dynamics with Hessian-driven damping ⋮ Decentralized Gradient Descent Maximization Method for Composite Nonconvex Strongly-Concave Minimax Problems ⋮ A simple homotopy proximal mapping algorithm for compressive sensing ⋮ A Unified Adaptive Tensor Approximation Scheme to Accelerate Composite Convex Optimization ⋮ A generic online acceleration scheme for optimization algorithms via relaxation and inertia ⋮ Nearly optimal first-order methods for convex optimization under gradient norm measure: an adaptive regularization approach ⋮ Restarting the accelerated coordinate descent method with a rough strong convexity estimate ⋮ Backtracking Strategies for Accelerated Descent Methods with Smooth Composite Objectives ⋮ An Optimal High-Order Tensor Method for Convex Optimization
Cites Work
- Smooth minimization of non-smooth functions
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- An adaptive accelerated first-order method for convex optimization
- Gradient methods for minimizing composite functions
- ParNes: A rapidly convergent algorithm for accurate recovery of sparse and approximately sparse signals
- New bounds on the restricted isometry constant \(\delta _{2k}\)
- Linear convergence of iterative soft-thresholding
- Fast global convergence of gradient methods for high-dimensional statistical recovery
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Introductory lectures on convex optimization. A basic course.
- Fine tuning Nesterov's steepest descent algorithm for differentiable convex programming
- Adaptive restart for accelerated gradient schemes
- A Proximal-Gradient Homotopy Method for the Sparse Least-Squares Problem
- Accelerated Block-coordinate Relaxation for Regularized Optimization
- A Fast Algorithm for Sparse Reconstruction Based on Shrinkage, Subspace Optimization, and Continuation
- Fixed-Point Continuation for $\ell_1$-Minimization: Methodology and Convergence
- Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information
- Decoding by Linear Programming
- Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
- From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images
- On the Linear Convergence of Descent Methods for Convex Essentially Smooth Minimization
- Atomic Decomposition by Basis Pursuit
- Sparse Reconstruction by Separable Approximation
- Convex Analysis
- Compressed sensing