Parameter-free accelerated gradient descent for nonconvex minimization
Publication: 6561381
DOI: 10.1137/22M1540934
zbMATH Open: 1548.90404
MaRDI QID: Q6561381
Authors: Naoki Marumo, Akiko Takeda
Publication date: 25 June 2024
Published in: SIAM Journal on Optimization
Recommendations
- Accelerated methods for nonconvex optimization
- Newton-type methods for non-convex optimization under inexact Hessian information
- Accelerated primal-dual gradient descent with linesearch for convex, nonconvex, and nonsmooth optimization problems
- Convergence Properties of an Objective-Function-Free Optimization Regularization Algorithm, Including an \(\mathcal{O}(\epsilon^{-3/2})\) Complexity Bound
- Gradient methods for minimizing composite functions
Mathematics Subject Classification
- Numerical mathematical programming methods (65K05)
- Nonconvex programming, global optimization (90C26)
- Nonlinear programming (90C30)
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Templates for convex cone problems with applications to sparse signal recovery
- A Limited Memory Algorithm for Bound Constrained Optimization
- Title not available
- Adaptive restart for accelerated gradient schemes
- Title not available
- First-order methods in optimization
- Title not available
- Minimization of functions having Lipschitz continuous first partial derivatives
- On the complexity of steepest descent, Newton's and regularized Newton's methods for nonconvex unconstrained optimization problems
- Ergodic mirror descent
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Convergence rate analysis of several splitting schemes
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- On the ergodic convergence rates of a first-order primal-dual algorithm
- Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization
- Fast first-order methods for composite convex optimization with backtracking
- Backtracking strategies for accelerated descent methods with smooth composite objectives
- Updating the regularization parameter in the adaptive cubic regularization algorithm
- Complexity analysis of second-order line-search algorithms for smooth nonconvex optimization
- Accelerated methods for nonconvex optimization
- Finding approximate local minima faster than gradient descent
- Lectures on convex optimization
- Variable metric inexact line-search-based methods for nonsmooth optimization
- Global rates of convergence for nonconvex optimization on manifolds
- Title not available
- Regularized Newton methods for minimizing functions with Hölder continuous Hessians
- Lower bounds for finding stationary points I
- Adaptive regularization with cubics on manifolds
- A Newton-CG algorithm with complexity guarantees for smooth unconstrained optimization
- An average curvature accelerated composite gradient method for nonconvex smooth composite optimization problems
- First-order and stochastic optimization methods for machine learning
- A unified adaptive tensor approximation scheme to accelerate composite convex optimization