On the worst-case evaluation complexity of non-monotone line search algorithms
Publication: 1694392
DOI: 10.1007/s10589-017-9928-3
zbMath: 1388.90111
OpenAlex: W2739128850
MaRDI QID: Q1694392
Ekkehard W. Sachs, Geovani Nunes Grapiglia
Publication date: 1 February 2018
Published in: Computational Optimization and Applications
Full work available at URL: https://doi.org/10.1007/s10589-017-9928-3
MSC classifications: Abstract computational complexity for mathematical programming problems (90C60); Nonlinear programming (90C30)
Related Items
- A cubic regularization of Newton's method with finite difference Hessian approximations
- Subsampled nonmonotone spectral gradient methods
- Worst-case evaluation complexity of a quadratic penalty method for nonconvex optimization
- Quadratic regularization methods with finite-difference gradient approximations
- A subgradient method with non-monotone line search
- A relaxed interior point method for low-rank semidefinite programming problems with applications to matrix completion
- A generalized worst-case complexity analysis for non-monotone line searches
- Conditional gradient method for multiobjective optimization
- Worst-case evaluation complexity of derivative-free nonmonotone line search methods for solving nonlinear systems of equations
- On the inexact scaled gradient projection method
- On the convergence properties of scaled gradient projection methods with non-monotone Armijo-like line searches
- First-order methods for the convex hull membership problem
- An inexact restoration-nonsmooth algorithm with variable accuracy for stochastic nonsmooth convex optimization problems in machine learning and stochastic linear complementarity problems
Cites Work
- On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization
- Nonlinear stepsize control algorithms: complexity bounds for first- and second-order optimality
- A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon ^{-3/2})\) for nonconvex optimization
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Worst case complexity of direct search
- Introductory lectures on convex optimization. A basic course.
- Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
- A nonmonotone trust region method based on nonincreasing technique of weighted average of the successive function values
- Incorporating nonmonotone strategies into the trust region method for unconstrained optimization
- Cubic regularization of Newton method and its global performance
- Optimization theory and methods. Nonlinear programming
- On the worst-case complexity of nonlinear stepsize control algorithms for convex unconstrained optimization
- Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization
- A class of nonmonotone Armijo-type line search method for unconstrained optimization
- Recursive Trust-Region Methods for Multiscale Nonlinear Optimization
- Testing Unconstrained Optimization Software
- A Nonmonotone Line Search Technique and Its Application to Unconstrained Optimization
- A Nonmonotone Line Search Technique for Newton’s Method
- Worst-case evaluation complexity of non-monotone gradient-related algorithms for unconstrained optimization
- The Use of Quadratic Regularization with a Cubic Descent Condition for Unconstrained Optimization
- Regularized Newton Methods for Minimizing Functions with Hölder Continuous Hessians
- Benchmarking optimization software with performance profiles.
- Worst case complexity of direct search under convexity