Potential-function proofs for gradient methods
From MaRDI portal
Publication: 5204822
Recommendations
- Convergence of first-order methods via the convex conjugate
- On the convergence analysis of the optimized gradient method
- Potential Function-Based Framework for Minimizing Gradients in Convex and Min-Max Optimization
- Gradient convergence in gradient methods with errors
- First-order methods for convex optimization
Cited in (10)
- The regularized submodular maximization via the Lyapunov method
- A Systematic Approach to Lyapunov Analyses of Continuous-Time Models in Convex Optimization
- A Laplacian approach to \(\ell_1\)-norm minimization
- The rate of convergence of optimization algorithms obtained via discretizations of heavy ball dynamical systems for convex optimization problems
- Recent theoretical advances in decentralized distributed convex optimization
- An optimal gradient method for smooth strongly convex minimization
- No-regret algorithms in on-line learning, games and convex optimization
- A note on approximate accelerated forward-backward methods with absolute and relative errors, and possibly strongly convex objectives
- Potential Function-Based Framework for Minimizing Gradients in Convex and Min-Max Optimization
- Factor-\(\sqrt{2}\) acceleration of accelerated gradient methods