Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
DOI: 10.1007/s10107-009-0337-y · zbMATH Open: 1229.90193 · OpenAlex: W1994974865 · Wikidata: Q58185744 · Scholia: Q58185744 · MaRDI QID: Q652287 · FDO: Q652287
Authors: Coralia Cartis, Nicholas I. M. Gould, Philippe L. Toint
Publication date: 14 December 2011
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://doi.org/10.1007/s10107-009-0337-y
Recommendations
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization
- ARC\(_q\): a new adaptive regularization by cubics
- On the use of iterative methods in cubic regularization for unconstrained optimization
- Complexity bounds for second-order optimality in unconstrained optimization
Keywords: nonlinear optimization · unconstrained optimization · global rate of convergence · Newton's method · trust-region methods · cubic regularization · global complexity bounds
MSC classification: Numerical mathematical programming methods (65K05) · Analysis of algorithms and problem complexity (68Q25) · Nonlinear programming (90C30) · Numerical methods based on nonlinear programming (49M37) · Abstract computational complexity for mathematical programming problems (90C60) · Newton-type methods (49M15) · Implicit function theorems; global Newton methods on manifolds (58C15)
Cites Work
- Title not available
- Numerical Optimization
- Title not available
- Introductory lectures on convex optimization. A basic course.
- Trust Region Methods
- Recursive Trust-Region Methods for Multiscale Nonlinear Optimization
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- A Characterization of Superlinear Convergence and Its Application to Quasi-Newton Methods
- Cubic regularization of Newton method and its global performance
- Accelerating the cubic regularization of Newton's method on convex problems
- A recursive \(\ell_\infty\)-trust-region method for bound-constrained nonlinear optimization
- Affine conjugate adaptive Newton methods for nonlinear elastomechanics
Cited In (only showing first 100 items)
- On the Quadratic Convergence of the Cubic Regularization Method under a Local Error Bound Condition
- Convergence Properties of an Objective-Function-Free Optimization Regularization Algorithm, Including an \(\boldsymbol{\mathcal{O}(\epsilon^{-3/2})}\) Complexity Bound
- First-Order Methods for Nonconvex Quadratic Minimization
- Universal Regularization Methods: Varying the Power, the Smoothness and the Accuracy
- Two modified adaptive cubic regularization algorithms by using the nonmonotone Armijo-type line search
- A Newton-based method for nonconvex optimization with fast evasion of saddle points
- Sharp worst-case evaluation complexity bounds for arbitrary-order nonconvex optimization with inexpensive constraints
- Projected adaptive cubic regularization algorithm with derivative-free filter technique for box constrained optimization
- A cubic regularization of Newton's method with finite difference Hessian approximations
- Accelerated regularized Newton methods for minimizing composite convex functions
- Gradient descent finds the cubic-regularized nonconvex Newton step
- A note on inexact gradient and Hessian conditions for cubic regularized Newton's method
- Newton-type methods for non-convex optimization under inexact Hessian information
- A generalized worst-case complexity analysis for non-monotone line searches
- Minimizing uniformly convex functions by cubic regularization of Newton method
- A decoupled first/second-order steps technique for nonconvex nonlinear unconstrained optimization with improved complexity bounds
- On high-order multilevel optimization strategies
- Smoothness parameter of power of Euclidean norm
- Adaptive regularization with cubics on manifolds
- Stochastic analysis of an adaptive cubic regularization method under inexact gradient evaluations and dynamic Hessian accuracy
- Trust-region Newton-CG with strong second-order complexity guarantees for nonconvex optimization
- A filter sequential adaptive cubic regularization algorithm for nonlinear constrained optimization
- Structured Quasi-Newton Methods for Optimization with Orthogonality Constraints
- New subspace minimization conjugate gradient methods based on regularization model for unconstrained optimization
- Several accelerated subspace minimization conjugate gradient methods based on regularization model and convergence rate analysis for nonconvex problems
- Implementable tensor methods in unconstrained convex optimization
- On global minimizers of quadratic functions with cubic regularization
- Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives
- Combining stochastic adaptive cubic regularization with negative curvature for nonconvex optimization
- Cubic regularization methods with second-order complexity guarantee based on a new subproblem reformulation
- Second-order guarantees of distributed gradient algorithms
- Solving the cubic regularization model by a nested restarting Lanczos method
- Iteration and evaluation complexity for the minimization of functions whose computation is intrinsically inexact
- Riemannian stochastic variance-reduced cubic regularized Newton method for submanifold optimization
- On large-scale unconstrained optimization and arbitrary regularization
- Finding zeros of Hölder metrically subregular mappings via globally convergent Levenberg-Marquardt methods
- On the use of third-order models with fourth-order regularization for unconstrained optimization
- Zeroth-order nonconvex stochastic optimization: handling constraints, high dimensionality, and saddle points
- A unified adaptive tensor approximation scheme to accelerate composite convex optimization
- Complexity of Partially Separable Convexly Constrained Optimization with Non-Lipschitzian Singularities
- A note about the complexity of minimizing Nesterov's smooth Chebyshev-Rosenbrock function
- An adaptive high order method for finding third-order critical points of nonconvex optimization
- Convergence of Newton-MR under inexact Hessian information
- A sequential adaptive regularisation using cubics algorithm for solving nonlinear equality constrained optimization
- Majorization-minimization-based Levenberg-Marquardt method for constrained nonlinear least squares
- Adaptive cubic regularization methods with dynamic inexact Hessian information and applications to finite-sum minimization
- Complexity bound of trust-region methods for convex smooth unconstrained multiobjective optimization
- A Newton-CG Based Augmented Lagrangian Method for Finding a Second-Order Stationary Point of Nonconvex Equality Constrained Optimization with Complexity Guarantees
- Adaptive Regularization Algorithms with Inexact Evaluations for Nonconvex Optimization
- On local nonglobal minimum of trust-region subproblem and extension
- On high-order model regularization for multiobjective optimization
- Large-scale unconstrained optimization using separable cubic modeling and matrix-free subspace minimization
- Complexity analysis of a trust funnel algorithm for equality constrained optimization
- Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
- Solving Large-Scale Cubic Regularization by a Generalized Eigenvalue Problem
- On High-order Model Regularization for Constrained Optimization
- Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
- A line-search algorithm inspired by the adaptive cubic regularization framework and complexity analysis
- Complexity analysis of second-order line-search algorithms for smooth nonconvex optimization
- Cubic regularization in symmetric rank-1 quasi-Newton methods
- On a global complexity bound of the Levenberg-Marquardt method
- Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
- Cubic overestimation and secant updating for unconstrained optimization of \(C^{2,1}\) functions
- A Newton-like method with mixed factorizations and cubic regularization for unconstrained minimization
- Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization
- Worst case complexity of direct search
- Worst case complexity of direct search under convexity
- Structured nonconvex and nonsmooth optimization: algorithms and iteration complexity analysis
- The use of quadratic regularization with a cubic descent condition for unconstrained optimization
- A new regularized quasi-Newton algorithm for unconstrained optimization
- On the complexity of finding first-order critical points in constrained nonlinear optimization
- On the worst-case complexity of nonlinear stepsize control algorithms for convex unconstrained optimization
- A derivative-free trust-region algorithm for composite nonsmooth optimization
- Concise complexity analyses for trust region methods
- An adaptive cubic regularization algorithm for nonconvex optimization with convex constraints and its function-evaluation complexity
- Regional complexity analysis of algorithms for nonconvex smooth optimization
- Worst-case complexity bounds of directional direct-search methods for multiobjective optimization
- On complexity and convergence of high-order coordinate descent algorithms for smooth nonconvex box-constrained minimization
- A smoothing SQP framework for a class of composite \(L_q\) minimization over polyhedron
- Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
- An interior affine scaling cubic regularization algorithm for derivative-free optimization subject to bound constraints
- Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary
- A cubic regularization algorithm for unconstrained optimization using line search and nonmonotone techniques
- Algebraic rules for quadratic regularization of Newton's method
- Global complexity bound of the inexact Levenberg-Marquardt method
- On the worst-case evaluation complexity of non-monotone line search algorithms
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Trust-region methods without using derivatives: worst case complexity and the nonsmooth case
- Global complexity bound of the Levenberg-Marquardt method
- Interior-point methods for nonconvex nonlinear programming: cubic regularization
- Separable cubic modeling and a trust-region strategy for unconstrained minimization with impact in global optimization
- A new augmented Lagrangian method for equality constrained optimization with simple unconstrained subproblem
- ARC\(_q\): a new adaptive regularization by cubics
- Updating the regularization parameter in the adaptive cubic regularization algorithm
- An improvement of adaptive cubic regularization method for unconstrained optimization problems
- Global complexity bound analysis of the Levenberg-Marquardt method for nonsmooth equations and its application to the nonlinear complementarity problem
- A trust region algorithm with adaptive cubic regularization methods for nonsmooth convex minimization
- Worst-case evaluation complexity of regularization methods for smooth unconstrained optimization using Hölder continuous gradients
- Evaluation complexity for nonlinear constrained optimization using unscaled KKT conditions and high-order models
- A note on the worst-case complexity of nonlinear stepsize control methods for convex smooth unconstrained optimization