Regional complexity analysis of algorithms for nonconvex smooth optimization
Keywords: nonconvex optimization; nonlinear optimization; worst-case iteration complexity; worst-case evaluation complexity; regional complexity analysis
Mathematics Subject Classification: Numerical mathematical programming methods (65K05); Numerical optimization and variational techniques (65K10); Complexity and performance of numerical algorithms (65Y20); Analysis of algorithms and problem complexity (68Q25); Nonconvex programming, global optimization (90C26); Abstract computational complexity for mathematical programming problems (90C60)
Abstract: A strategy is proposed for characterizing the worst-case performance of algorithms for solving nonconvex smooth optimization problems. Contemporary analyses characterize worst-case performance by providing, under certain assumptions on an objective function, an upper bound on the number of iterations (or function or derivative evaluations) required until a \(p\)th-order stationarity condition is approximately satisfied. This arguably leads to conservative characterizations based on anomalous objectives rather than on ones typically encountered in practice. By contrast, the strategy proposed in this paper characterizes worst-case performance separately over regions comprising a search space. These regions are defined generically based on properties of derivative values. In this manner, one can analyze the worst-case performance of an algorithm independently of any particular class of objectives. Then, once given a class of objectives, one can obtain an informative, fine-tuned complexity analysis merely by delineating the types of regions that comprise the search spaces for functions in the class. Regions defined by first- and second-order derivatives are discussed in detail, and example complexity analyses are provided for a few fundamental first- and second-order algorithms when employed to minimize convex and nonconvex objectives of interest. It is also explained how the strategy can be generalized to regions defined by higher-order derivatives and to the analysis of higher-order algorithms.
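For illustration only (a sketch of the idea, not necessarily the paper's exact definitions): with tolerances \(\epsilon_g, \epsilon_H > 0\), a first-/second-order regional decomposition of the search space might take the form
\[
\mathcal{R}_1 = \{x : \|\nabla f(x)\| \geq \epsilon_g\}, \qquad
\mathcal{R}_2 = \{x : \|\nabla f(x)\| < \epsilon_g,\ \lambda_{\min}(\nabla^2 f(x)) \leq -\epsilon_H\},
\]
i.e., a large-gradient region and a negative-curvature region. The iterations an algorithm spends in each region are then bounded separately, and for an objective whose search space consists of such regions, an overall worst-case bound follows by summing the per-region bounds.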
Recommendations
- A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon^{-3/2})\) for nonconvex optimization
- Evaluation complexity of algorithms for nonconvex optimization. Theory, computation and perspectives
- On the oracle complexity of first-order and derivative-free algorithms for smooth nonconvex minimization
- Worst-Case Complexity of TRACE with Inexact Subproblem Solutions for Nonconvex Smooth Optimization
- Sharp worst-case evaluation complexity bounds for arbitrary-order nonconvex optimization with inexpensive constraints
Cites work
- scientific article; zbMATH DE number 5060482
- scientific article; zbMATH DE number 3313108
- A Newton-based method for nonconvex optimization with fast evasion of saddle points
- A decoupled first/second-order steps technique for nonconvex nonlinear unconstrained optimization with improved complexity bounds
- A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon^{-3/2})\) for nonconvex optimization
- ARC\(_q\): a new adaptive regularization by cubics
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- An inexact regularized Newton framework with a worst-case iteration complexity of \(\mathscr{O}(\varepsilon^{-3/2})\) for nonconvex optimization
- Complexity analysis of second-order line-search algorithms for smooth nonconvex optimization
- Concise complexity analyses for trust region methods
- Cubic regularization of Newton method and its global performance
- Exploiting negative curvature in deterministic and stochastic optimization
- First-order methods almost always avoid strict saddle points
- Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
- Introductory lectures on convex optimization. A basic course.
- Nonlinear stepsize control algorithms: complexity bounds for first- and second-order optimality
- Nonlinear stepsize control, trust regions and regularizations for unconstrained optimization
- On the average number of steps of the simplex method of linear programming
- On the complexity of steepest descent, Newton's and regularized Newton's methods for nonconvex unconstrained optimization problems
- On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization
- Smoothed analysis of algorithms
- The average number of pivot steps required by the Simplex-Method is polynomial
- The use of quadratic regularization with a cubic descent condition for unconstrained optimization
- Trust Region Methods
- Updating the regularization parameter in the adaptive cubic regularization algorithm
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
- Worst-case evaluation complexity of regularization methods for smooth unconstrained optimization using Hölder continuous gradients
Cited in 3 documents