Sharp Worst-Case Evaluation Complexity Bounds for Arbitrary-Order Nonconvex Optimization with Inexpensive Constraints

From MaRDI portal

DOI: 10.1137/17M1144854
zbMath: 1437.90128
arXiv: 1811.01220
OpenAlex: W2962948705
MaRDI QID: Q5217594

Coralia Cartis, Nicholas I. M. Gould, Philippe L. Toint

Publication date: 25 February 2020

Published in: SIAM Journal on Optimization

Full work available at URL: https://arxiv.org/abs/1811.01220



Related Items

A nonlinear conjugate gradient method using inexact first-order information
An adaptive regularization method in Banach spaces
Convergence Properties of an Objective-Function-Free Optimization Regularization Algorithm, Including an \(\boldsymbol{\mathcal{O}(\epsilon^{-3/2})}\) Complexity Bound
Super-Universal Regularized Newton Method
Trust-region algorithms: probabilistic complexity and intrinsic noise with applications to subsampling techniques
Approximating Higher-Order Derivative Tensors Using Secant Updates
The evaluation complexity of finding high-order minimizers of nonconvex optimization
A sequential adaptive regularisation using cubics algorithm for solving nonlinear equality constrained optimization
The impact of noise on evaluation complexity: the deterministic trust-region case
An algorithm for the minimization of nonsmooth nonconvex functions using inexact evaluations and its worst-case complexity
High-order evaluation complexity for convexly-constrained optimization with non-Lipschitzian group sparsity terms
A concise second-order complexity analysis for unconstrained optimization using high-order regularized models
Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives
Derivative-free optimization methods
An adaptive high order method for finding third-order critical points of nonconvex optimization
On complexity and convergence of high-order coordinate descent algorithms for smooth nonconvex box-constrained minimization



Cites Work