Analysis of the gradient method with an Armijo–Wolfe line search on a class of non-smooth convex functions
Publication: 5210738
DOI: 10.1080/10556788.2019.1673388
zbMATH: 1437.90124
arXiv: 1711.08517
OpenAlex: W2979788615
Wikidata: Q127119928 (Scholia: Q127119928)
MaRDI QID: Q5210738
Publication date: 21 January 2020
Published in: Optimization Methods and Software
Full work available at URL: https://arxiv.org/abs/1711.08517
Related Items
- A \(J\)-symmetric quasi-Newton method for minimax problems
- Backtracking gradient descent method and some applications in large scale optimisation. II: Algorithms and experiments
- Nonsmooth Variants of Powell's BFGS Convergence Theorem
- An inexact restoration-nonsmooth algorithm with variable accuracy for stochastic nonsmooth convex optimization problems in machine learning and stochastic linear complementarity problems
Uses Software
Cites Work
- Smooth minimization of non-smooth functions
- Nonsmooth optimization via quasi-Newton methods
- On the limited memory BFGS method for large scale optimization
- Methods of descent for nondifferentiable optimization
- Variational analysis of the Crouzeix ratio
- Minimization of functions having Lipschitz continuous first partial derivatives
- Incremental Subgradient Methods for Nondifferentiable Optimization
- Convergence of the Gradient Sampling Algorithm for Nonsmooth Nonconvex Optimization
- Nonsmooth Variants of Powell's BFGS Convergence Theorem
- Analysis of limited-memory BFGS on a class of nonsmooth convex functions
- A BFGS-SQP method for nonsmooth, nonconvex, constrained optimization and its evaluation using relative minimization profiles
- Exact Worst-Case Performance of First-Order Methods for Composite Convex Optimization
- A Robust Gradient Sampling Algorithm for Nonsmooth, Nonconvex Optimization
- Convergence Conditions for Ascent Methods