An adaptive high order method for finding third-order critical points of nonconvex optimization
From MaRDI portal
Publication:2079692
Abstract: It is well known that finding a global optimum is extremely challenging in nonconvex optimization. There have been recent efforts [anandkumar2016efficient; cartis2018second; cartis2020sharp; chen2019high] to develop optimization methods for computing higher-order critical points, which can exclude so-called degenerate saddle points and reach solutions of better quality. Despite the theoretical development in [anandkumar2016efficient; cartis2018second; cartis2020sharp; chen2019high], the corresponding numerical experiments are missing. In this paper, we propose an implementable higher-order method, named the adaptive high order method (AHOM), that aims to find third-order critical points. This is achieved by solving an "easier" subproblem and incorporating an adaptive parameter-tuning strategy in each iteration of the algorithm. The iteration complexity of the proposed method is established. Preliminary numerical results show that AHOM is able to escape degenerate saddle points, where second-order methods can get stuck.
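To make the distinction between second- and third-order criticality concrete, here is a minimal sketch (not taken from the paper; the function `f(x, y) = x^2 + y^3` is an illustrative assumption) of a degenerate saddle point: the origin passes the first- and second-order optimality tests, yet a higher-order probe reveals a descent direction.

```python
import numpy as np

# Toy example of a degenerate saddle point (illustrative, not from the paper):
# f(x, y) = x^2 + y^3 has a vanishing gradient and a positive semidefinite
# Hessian at the origin, so second-order methods may declare it stationary,
# yet f decreases along the direction (0, -1).

def f(v):
    x, y = v
    return x**2 + y**3

def grad(v):
    x, y = v
    return np.array([2.0 * x, 3.0 * y**2])

def hess(v):
    x, y = v
    return np.array([[2.0, 0.0], [0.0, 6.0 * y]])

origin = np.array([0.0, 0.0])

# First-order condition: the gradient vanishes at the origin.
assert np.allclose(grad(origin), 0.0)

# Second-order condition: all Hessian eigenvalues are nonnegative (PSD).
assert np.all(np.linalg.eigvalsh(hess(origin)) >= 0.0)

# Third-order probe: stepping along (0, -1) strictly decreases f,
# so the origin is a degenerate saddle, not a local minimizer.
step = 1e-2 * np.array([0.0, -1.0])
print(f(origin + step) < f(origin))  # True: descent found beyond second order
```

This is exactly the kind of point that a method targeting third-order critical points, such as AHOM, is designed to escape.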
Recommendations
- Adaptive Third-Order Methods for Composite Convex Optimization
- Introduction to high-order optimization methods
- Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
- Accelerated methods for nonconvex optimization
- A Newton-CG Based Barrier Method for Finding a Second-Order Stationary Point of Nonconvex Conic Optimization with Complexity Guarantees
Cites work
- A concise second-order complexity analysis for unconstrained optimization using high-order regularized models
- A unified adaptive tensor approximation scheme to accelerate composite convex optimization
- Accelerated methods for nonconvex optimization
- Accelerating the cubic regularization of Newton's method on convex problems
- Adaptive Regularization Algorithms with Inexact Evaluations for Nonconvex Optimization
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Adaptive subgradient methods for online learning and stochastic optimization
- An algorithm for the minimization of nonsmooth nonconvex functions using inexact evaluations and its worst-case complexity
- An inexact regularized Newton framework with a worst-case iteration complexity of \(\mathscr{O}(\varepsilon^{-3/2})\) for nonconvex optimization
- An optimal high-order tensor method for convex optimization
- Complexity of Partially Separable Convexly Constrained Optimization with Non-Lipschitzian Singularities
- Convergence and evaluation-complexity analysis of a regularized tensor-Newton method for solving nonlinear least-squares problems
- Cubic regularization of Newton method and its global performance
- Evaluation complexity for nonlinear constrained optimization using unscaled KKT conditions and high-order models
- Extremal tests for scalar functions of several real variables at degenerate critical points
- Generalized uniformly optimal methods for nonlinear programming
- High-order evaluation complexity for convexly-constrained optimization with non-Lipschitzian group sparsity terms
- Implementable tensor methods in unconstrained convex optimization
- Inexact restoration for derivative-free expensive function minimization and applications
- Introductory lectures on convex optimization. A basic course.
- Lower bounds for finding stationary points I
- On High-order Model Regularization for Constrained Optimization
- On the use of third-order models with fourth-order regularization for unconstrained optimization
- Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
- Sharp worst-case evaluation complexity bounds for arbitrary-order nonconvex optimization with inexpensive constraints
- Solving the Trust-Region Subproblem using the Lanczos Method
- Some NP-complete problems in quadratic and nonlinear programming
- Structured nonconvex and nonsmooth optimization: algorithms and iteration complexity analysis
- Tensor methods for minimizing convex functions with Hölder continuous higher-order derivatives
- The hierarchy of local minimums in polynomial optimization
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
Cited in (2)