On the global convergence rate of the gradient descent method for functions with Hölder continuous gradients
From MaRDI portal
Publication: 315517
DOI: 10.1007/s11590-015-0936-x
zbMath: 1353.90151
OpenAlex: W2272094913
MaRDI QID: Q315517
Publication date: 21 September 2016
Published in: Optimization Letters
Full work available at URL: https://doi.org/10.1007/s11590-015-0936-x
Keywords: global convergence; nonlinear programming; convergence rate; gradient descent method; Hölder continuous gradient; upper complexity bound
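As background for the keywords above, the following is a minimal Python sketch of gradient descent applied to a function whose gradient is assumed Hölder continuous with exponent nu in (0, 1] and constant H. The fixed step-size rule, the function name gradient_descent_holder, and the parameters H, nu, eps, and max_iter are illustrative assumptions for this sketch and are not taken from the cited paper's algorithm or complexity analysis.

```python
import numpy as np

def gradient_descent_holder(grad, x0, H=1.0, nu=1.0, eps=1e-6, max_iter=1000):
    """Gradient descent with a fixed step-size rule for a gradient assumed
    Hölder continuous with exponent nu in (0, 1] and constant H, i.e.
    ||grad(x) - grad(y)|| <= H * ||x - y||**nu.
    This step-size choice is a standard textbook heuristic, not the rule
    analyzed in the paper above."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm <= eps:  # stop once the gradient is small enough
            break
        # Minimizing the Hölder upper model along -g gives this step length;
        # for nu = 1 (Lipschitz gradient) it reduces to the classical 1/H step.
        step = gnorm ** ((1.0 - nu) / nu) / H ** (1.0 / nu)
        x = x - step * g
    return x

# Example on a simple quadratic, whose gradient is Lipschitz (nu = 1, H = 1):
if __name__ == "__main__":
    grad = lambda x: x  # gradient of f(x) = 0.5 * ||x||^2
    print(gradient_descent_holder(grad, x0=np.ones(5)))
```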
Related Items (5)
- On the complexity of solving feasibility problems with regularized models
- On the quality of first-order approximation of functions with Hölder continuous gradient
- Sobolev gradient preconditioning for elliptic reaction-diffusion problems with some nonsmooth nonlinearities
- Generalized uniformly optimal methods for nonlinear programming
- Cyclic coordinate descent in the Hölder smooth setting
Cites Work
- Smooth minimization of non-smooth functions
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- First-order methods of smooth convex optimization with inexact oracle
- An optimal method for stochastic composite optimization
- A nonmonotone spectral projected gradient method for large-scale topology optimization problems
- An efficient gradient method using the Yuan steplength
- Universal gradient methods for convex optimization problems
- A coordinate gradient descent method for linearly constrained smooth optimization and support vector machines training
- Introductory lectures on convex optimization. A basic course.
- A gradient-based continuous method for large-scale optimization problems
- Bregman operator splitting with variable stepsize for total variation image reconstruction
- Bundle-level type methods uniformly optimal for smooth and nonsmooth convex optimization
- An alternating direction approximate Newton algorithm for ill-conditioned inverse problems with application to parallel MRI
- The Barzilai and Borwein Gradient Method for the Large Scale Unconstrained Minimization Problem
- Optimal methods of smooth convex minimization
- Two-Point Step Size Gradient Methods
- Global Convergence Properties of Conjugate Gradient Methods for Optimization
- Optimal Primal-Dual Methods for a Class of Saddle Point Problems
- On the Barzilai and Borwein choice of steplength for the gradient method
- A Nonlinear Conjugate Gradient Algorithm with an Optimal Property and an Improved Wolfe Line Search
- A New Conjugate Gradient Method with Guaranteed Descent and an Efficient Line Search
- The Limited Memory Conjugate Gradient Method
- Some descent three-term conjugate gradient methods and their global convergence
- Convergence Conditions for Ascent Methods
- Convergence Conditions for Ascent Methods. II: Some Corrections
- On Steepest Descent
- Relaxed steepest descent and Cauchy-Barzilai-Borwein method