Smoothing accelerated proximal gradient method with fast convergence rate for nonsmooth convex optimization beyond differentiability
Publication: 6161546
DOI: 10.1007/s10957-023-02176-6
zbMath: 1515.65164
arXiv: 2110.01454
OpenAlex: W4323075888
MaRDI QID: Q6161546
Publication date: 27 June 2023
Published in: Journal of Optimization Theory and Applications
Full work available at URL: https://arxiv.org/abs/2110.01454
Keywords: nonsmooth optimization; convergence rate; smoothing method; sequential convergence; accelerated algorithm with extrapolation
MSC classifications: Numerical mathematical programming methods (65K05); Convex programming (90C25); Nonlinear programming (90C30)
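The record itself only names the technique via the title and keywords. For orientation, the sketch below is a minimal, purely illustrative Python loop combining a decreasing smoothing parameter with FISTA-style extrapolation, in the spirit of "smoothing method" plus "accelerated algorithm with extrapolation". The test problem, the Huber smoothing of the ℓ1 term, the schedule mu = 1/k, and all names are assumptions for illustration, not the algorithm of the cited paper.

```python
import numpy as np

# Illustrative sketch only: a generic smoothing accelerated gradient loop,
# NOT the specific method of the publication recorded above.
# Assumed test problem: min_x 0.5*||A x - b||^2 + ||x||_1, with the l1
# term replaced by its Huber smoothing g_mu at each iteration.

def huber_grad(x, mu):
    """Gradient of the Huber smoothing of ||x||_1 with parameter mu."""
    return np.clip(x / mu, -1.0, 1.0)

def smoothing_apg(A, b, n_iter=500):
    m, n = A.shape
    L_f = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the smooth part
    x = x_prev = np.zeros(n)
    for k in range(1, n_iter + 1):
        mu = 1.0 / k                              # decreasing smoothing parameter (assumed schedule)
        y = x + (k - 1) / (k + 2) * (x - x_prev)  # FISTA-style extrapolation
        grad = A.T @ (A @ y - b) + huber_grad(y, mu)
        step = 1.0 / (L_f + 1.0 / mu)             # valid step size for the mu-smoothed objective
        x_prev, x = x, y - step * grad
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100)
    x_true[:5] = 1.0
    b = A @ x_true
    x_hat = smoothing_apg(A, b)
    print("residual:", np.linalg.norm(A @ x_hat - b))
```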
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Smooth minimization of non-smooth functions
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A smoothing SQP framework for a class of composite \(L_q\) minimization over polyhedron
- Iterative hard thresholding methods for \(l_0\) regularized convex cone programming
- On the convergence of the iterates of the "fast iterative shrinkage/thresholding algorithm"
- On the linear convergence of the alternating direction method of multipliers
- Adaptive smoothing algorithms for nonsmooth composite convex minimization
- Smoothing methods for nonsmooth, nonconvex minimization
- On the weak convergence of an ergodic iteration for the solution of variational inequalities for monotone operators in Hilbert space
- Ergodic convergence to a zero of the sum of monotone operators in Hilbert space
- An algorithm for total variation minimization and applications
- Templates for convex cone problems with applications to sparse signal recovery
- First-order optimization algorithms via inertial systems with Hessian driven damping
- Convergence rate of inertial proximal algorithms with general extrapolation and proximal coefficients
- Variable smoothing for convex optimization problems using stochastic gradients
- Optimization problems involving group sparsity terms
- A variable smoothing algorithm for solving convex optimization problems
- Forward-backward splitting with Bregman distances
- Fast convergence of inertial dynamics and algorithms with asymptotic vanishing viscosity
- Strong oracle optimality of folded concave penalized estimation
- Linear Convergence of the Alternating Direction Method of Multipliers for a Class of Convex Optimization Problems
- The Rate of Convergence of Nesterov's Accelerated Forward-Backward Method is Actually Faster Than $1/k^2$
- Accelerated and Inexact Forward-Backward Algorithms
- Smoothing Techniques for Computing Nash Equilibria of Sequential Games
- A Smoothing Direct Search Method for Monte Carlo-Based Bound Constrained Composite Nonsmooth Optimization
- Stability of Over-Relaxations for the Forward-Backward Algorithm, Application to FISTA
- Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information
- Smoothing Projected Gradient Method and Its Application to Stochastic Linear Complementarity Problems
- A generalized proximal point algorithm for certain non-convex minimization problems
- New Proximal Point Algorithms for Convex Minimization
- Variational Analysis
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Proximal Mapping for Symmetric Penalty and Sparsity
- Smoothing accelerated algorithm for constrained nonsmooth convex optimization problems
- Convergence rate of a relaxed inertial proximal algorithm for convex minimization
- Finite Convergence of Proximal-Gradient Inertial Algorithms Combining Dry Friction with Hessian-Driven Damping
- A Smoothing Active Set Method for Linearly Constrained Non-Lipschitz Nonconvex Optimization
- Fast Proximal Methods via Time Scaling of Damped Inertial Dynamics
- Smoothing SQP Methods for Solving Degenerate Nonsmooth Constrained Optimization Problems with Applications to Bilevel Programs
- Weak convergence of the sequence of successive approximations for nonexpansive mappings
- A Descent Lemma Beyond Lipschitz Gradient Continuity: First-Order Methods Revisited and Applications
- Compressed sensing