Complexity bounds for second-order optimality in unconstrained optimization
From MaRDI portal
Recommendations
- Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
- On second-order conditions in unconstrained optimization
- On the Evaluation Complexity of Constrained Nonlinear Least-Squares and General Constrained Nonlinear Optimization Using Second-Order Methods
- OFFO minimization algorithms for second-order optimality and their complexity
- A concise second-order complexity analysis for unconstrained optimization using high-order regularized models
- scientific article; zbMATH DE number 1460258
- Oracle complexity of second-order methods for smooth convex optimization
- Worst-case evaluation complexity and optimality of second-order methods for nonconvex smooth optimization
- A second order method for unconstrained optimization
- Complexity analysis of second-order line-search algorithms for smooth nonconvex optimization
Cites work
- scientific article; zbMATH DE number 501506
- Accelerating the cubic regularization of Newton's method on convex problems
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Affine conjugate adaptive Newton methods for nonlinear elastomechanics
- An adaptive cubic regularization algorithm for nonconvex optimization with convex constraints and its function-evaluation complexity
- Approximation algorithms for indefinite quadratic programming
- Black-Box Complexity of Local Minimization
- Cubic regularization of Newton method and its global performance
- Introductory lectures on convex optimization. A basic course.
- On the complexity of steepest descent, Newton's and regularized Newton's methods for nonconvex unconstrained optimization problems
- On the oracle complexity of first-order and derivative-free algorithms for smooth nonconvex minimization
- Recursive Trust-Region Methods for Multiscale Nonlinear Optimization
- Trust Region Methods
- Trust-region and other regularisations of linear least-squares problems
- Worst case complexity of direct search
Cited in (46)
- Finding stationary points on bounded-rank matrices: a geometric hurdle and a smooth remedy
- Trust-region Newton-CG with strong second-order complexity guarantees for nonconvex optimization
- Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
- Newton-type methods for non-convex optimization under inexact Hessian information
- A second-order globally convergent direct-search method and its worst-case complexity
- Convergence of Newton-MR under inexact Hessian information
- Complexity analysis of interior point algorithms for non-Lipschitz and nonconvex minimization
- Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary
- Worst-case evaluation complexity and optimality of second-order methods for nonconvex smooth optimization
- Lower bounds for non-convex stochastic optimization
- On the Evaluation Complexity of Constrained Nonlinear Least-Squares and General Constrained Nonlinear Optimization Using Second-Order Methods
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- OFFO minimization algorithms for second-order optimality and their complexity
- A note about the complexity of minimizing Nesterov's smooth Chebyshev-Rosenbrock function
- Adaptive regularization with cubics on manifolds
- Scalable adaptive cubic regularization methods
- Complexity of unconstrained \(L_2 - L_p\) minimization
- Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
- The use of quadratic regularization with a cubic descent condition for unconstrained optimization
- On the complexity of an inexact restoration method for constrained optimization
- Hessian barrier algorithms for non-convex conic optimization
- A geometric analysis of phase retrieval
- Stochastic analysis of an adaptive cubic regularization method under inexact gradient evaluations and dynamic Hessian accuracy
- Gradient descent finds the cubic-regularized nonconvex Newton step
- A Newton-CG algorithm with complexity guarantees for smooth unconstrained optimization
- Cubic overestimation and secant updating for unconstrained optimization of \(C^{2,1}\) functions
- Detecting negative eigenvalues of exact and approximate Hessian matrices in optimization
- Stochastic variance-reduced cubic regularization methods
- Lower bounds for finding stationary points II: first-order methods
- Riemannian stochastic variance-reduced cubic regularized Newton method for submanifold optimization
- Updating the regularization parameter in the adaptive cubic regularization algorithm
- Convergence and worst-case complexity of adaptive Riemannian trust-region methods for optimization on manifolds
- Using improved directions of negative curvature for the solution of bound-constrained nonconvex problems
- Set-limited functions and polynomial-time interior-point methods
- Sharp restricted isometry bounds for the inexistence of spurious local minima in nonconvex matrix recovery
- A decoupled first/second-order steps technique for nonconvex nonlinear unconstrained optimization with improved complexity bounds
- A note on inexact gradient and Hessian conditions for cubic regularized Newton's method
- A concise second-order complexity analysis for unconstrained optimization using high-order regularized models
- Complexity analysis of second-order line-search algorithms for smooth nonconvex optimization
- Nonlinear stepsize control algorithms: complexity bounds for first- and second-order optimality
- Lower bounds for finding stationary points I
- Computing second-order points under equality constraints: revisiting Fletcher's augmented Lagrangian
- On regularization and active-set methods with complexity for constrained optimization
- Trust region-type method under inexact gradient and inexact Hessian with convergence analysis
- Complexity of proximal augmented Lagrangian for nonconvex optimization with nonlinear equality constraints
This page was built for publication: Complexity bounds for second-order optimality in unconstrained optimization (MaRDI item Q657654)