Parameter-free accelerated gradient descent for nonconvex minimization
Publication: 6561381
Recommendations
- Accelerated methods for nonconvex optimization
- Newton-type methods for non-convex optimization under inexact Hessian information
- Accelerated primal-dual gradient descent with linesearch for convex, nonconvex, and nonsmooth optimization problems
- Convergence properties of an objective-function-free optimization regularization algorithm, including an \(\mathcal{O}(\epsilon^{-3/2})\) complexity bound
- Gradient methods for minimizing composite functions
Cited works
- Scientific article, zbMATH DE number 3850830 (title unavailable)
- Scientific article, zbMATH DE number 45971 (title unavailable)
- Scientific article, zbMATH DE number 2152541 (title unavailable)
- Scientific article, zbMATH DE number 5060482 (title unavailable)
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A Limited Memory Algorithm for Bound Constrained Optimization
- A Newton-CG algorithm with complexity guarantees for smooth unconstrained optimization
- A unified adaptive tensor approximation scheme to accelerate composite convex optimization
- Accelerated methods for nonconvex optimization
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Adaptive regularization with cubics on manifolds
- Adaptive restart for accelerated gradient schemes
- An average curvature accelerated composite gradient method for nonconvex smooth composite optimization problems
- Backtracking strategies for accelerated descent methods with smooth composite objectives
- Complexity analysis of second-order line-search algorithms for smooth nonconvex optimization
- Convergence rate analysis of several splitting schemes
- Ergodic mirror descent
- Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization
- Fast first-order methods for composite convex optimization with backtracking
- Finding approximate local minima faster than gradient descent
- First-order and stochastic optimization methods for machine learning
- First-order methods in optimization
- Global rates of convergence for nonconvex optimization on manifolds
- Lectures on convex optimization
- Lower bounds for finding stationary points I
- Minimization of functions having Lipschitz continuous first partial derivatives
- On the complexity of steepest descent, Newton's and regularized Newton's methods for nonconvex unconstrained optimization problems
- On the ergodic convergence rates of a first-order primal-dual algorithm
- Regularized Newton methods for minimizing functions with Hölder continuous Hessians
- Templates for convex cone problems with applications to sparse signal recovery
- Updating the regularization parameter in the adaptive cubic regularization algorithm
- Variable metric inexact line-search-based methods for nonsmooth optimization