On the complexity of steepest descent, Newton's and regularized Newton's methods for nonconvex unconstrained optimization problems
nonlinear optimization; unconstrained optimization; global rate of convergence; Newton's method; trust-region methods; steepest-descent method; cubic regularization; global complexity bounds
Numerical mathematical programming methods (65K05) Analysis of algorithms and problem complexity (68Q25) Nonlinear programming (90C30) Numerical methods based on nonlinear programming (49M37) Abstract computational complexity for mathematical programming problems (90C60) Newton-type methods (49M15) Numerical methods based on necessary conditions (49M05) Implicit function theorems; global Newton methods on manifolds (58C15)
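The paper's central object, the worst-case evaluation complexity of steepest descent on nonconvex smooth problems, can be illustrated with a minimal sketch. The snippet below is not the paper's algorithm, just the textbook fixed-step gradient method it analyzes: for an L-smooth function bounded below, driving the gradient norm below \(\varepsilon\) takes at most O(\(\varepsilon^{-2}\)) iterations. The test function and all parameter values are illustrative assumptions.

```python
import numpy as np

def steepest_descent(grad, x0, L, eps, max_iter=100_000):
    """Fixed-step steepest descent: x <- x - (1/L) * grad f(x).

    Stops when ||grad f(x)|| <= eps. For L-smooth f bounded below,
    the classical worst-case bound on the iteration count is O(eps^-2).
    """
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:
            return x, k
        x = x - g / L
    return x, max_iter

# Illustrative nonconvex smooth function: f(x) = x1^2 + cos(x2),
# whose gradient is Lipschitz with constant L = 2.
grad = lambda x: np.array([2.0 * x[0], -np.sin(x[1])])
x_star, iters = steepest_descent(grad, [1.0, 1.0], L=2.0, eps=1e-6)
```

Here the iterate approaches the stationary point \((0, \pi)\); the point of the paper is that on adversarially chosen smooth functions the \(\varepsilon^{-2}\) dependence of the iteration count cannot be improved for this method, whereas cubic-regularized Newton achieves O(\(\varepsilon^{-3/2}\)).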
- Worst-case evaluation complexity and optimality of second-order methods for nonconvex smooth optimization
- A note on the worst-case complexity of nonlinear stepsize control methods for convex smooth unconstrained optimization
- Cubic regularization of Newton method and its global performance
- Worst-case evaluation complexity of non-monotone gradient-related algorithms for unconstrained optimization
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
- On high-order model regularization for multiobjective optimization
- Complexity of gradient descent for multiobjective optimization
- A regularized Newton method without line search for unconstrained optimization
- Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- On High-order Model Regularization for Constrained Optimization
- Complexity of the regularized Newton's method
- Parameter-free accelerated gradient descent for nonconvex minimization
- Nonlinear stepsize control, trust regions and regularizations for unconstrained optimization
- A Riemannian dimension-reduced second-order method with application in sensor network localization
- Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
- A hybrid inexact regularized Newton and negative curvature method
- Complexity analysis of second-order line-search algorithms for smooth nonconvex optimization
- On a global complexity bound of the Levenberg-Marquardt method
- Sub-sampled Newton methods
- Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
- A classification of slow convergence near parametric periodic points of discrete dynamical systems
- A Newton-like method with mixed factorizations and cubic regularization for unconstrained minimization
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization
- Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization
- Universal Regularization Methods: Varying the Power, the Smoothness and the Accuracy
- Worst case complexity of direct search
- Fault detection based on online probability density function estimation
- Oracle complexity of second-order methods for smooth convex optimization
- Structured nonconvex and nonsmooth optimization: algorithms and iteration complexity analysis
- A Newton-like trust region method for large-scale unconstrained nonconvex minimization
- Worst case complexity of direct search under convexity
- Complexity of the Newton method for set-valued maps
- On the complexity of finding first-order critical points in constrained nonlinear optimization
- The use of quadratic regularization with a cubic descent condition for unconstrained optimization
- On the worst-case complexity of nonlinear stepsize control algorithms for convex unconstrained optimization
- Sharp worst-case evaluation complexity bounds for arbitrary-order nonconvex optimization with inexpensive constraints
- A cubic regularization of Newton's method with finite difference Hessian approximations
- Regional complexity analysis of algorithms for nonconvex smooth optimization
- An inexact regularized Newton framework with a worst-case iteration complexity of \(\mathscr{O}(\varepsilon^{-3/2})\) for nonconvex optimization
- Worst-Case Complexity of TRACE with Inexact Subproblem Solutions for Nonconvex Smooth Optimization
- Reduced approach for stochastic optimal control problems
- An incremental descent method for multi-objective optimization
- Co-design of linear systems using generalized Benders decomposition
- Newton-type methods for non-convex optimization under inexact Hessian information
- A generalized worst-case complexity analysis for non-monotone line searches
- Global complexity bound of the inexact Levenberg-Marquardt method
- A cubic regularization algorithm for unconstrained optimization using line search and nonmonotone techniques
- Lower bounds for finding stationary points I
- On high-order multilevel optimization strategies
- Lower bounds for finding stationary points II: first-order methods
- Trust-region methods without using derivatives: worst case complexity and the nonsmooth case
- Stochastic analysis of an adaptive cubic regularization method under inexact gradient evaluations and dynamic Hessian accuracy
- Updating the regularization parameter in the adaptive cubic regularization algorithm
- On the quadratic convergence of the cubic regularization method under a local error bound condition
- A Newton-CG algorithm with complexity guarantees for smooth unconstrained optimization
- ARC\(_q\): a new adaptive regularization by cubics
- Inductive manifold learning using structured support vector machine
- Lower bounds for non-convex stochastic optimization
- Global complexity bound analysis of the Levenberg-Marquardt method for nonsmooth equations and its application to the nonlinear complementarity problem
- No dimension-free deterministic algorithm computes approximate stationarities of Lipschitzians
- Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives
- A nonlinear conjugate gradient method with complexity guarantees and its application to nonconvex regression
- Evaluation complexity for nonlinear constrained optimization using unscaled KKT conditions and high-order models
- The exact worst-case convergence rate of the gradient method with fixed step lengths for \(L\)-smooth functions
- An efficient adaptive accelerated inexact proximal point method for solving linearly constrained nonconvex composite problems
- Convergence and evaluation-complexity analysis of a regularized tensor-Newton method for solving nonlinear least-squares problems
- Iteration and evaluation complexity for the minimization of functions whose computation is intrinsically inexact
- On large-scale unconstrained optimization and arbitrary regularization
- On regularization and active-set methods with complexity for constrained optimization
- Finding zeros of Hölder metrically subregular mappings via globally convergent Levenberg-Marquardt methods
- On the use of third-order models with fourth-order regularization for unconstrained optimization
- Convergence guarantees for a class of non-convex and non-smooth optimization problems
- A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon ^{-3/2})\) for nonconvex optimization
- On the Evaluation Complexity of Constrained Nonlinear Least-Squares and General Constrained Nonlinear Optimization Using Second-Order Methods
- Generalized uniformly optimal methods for nonlinear programming
- Complexity bounds for second-order optimality in unconstrained optimization
- Complexity of Partially Separable Convexly Constrained Optimization with Non-Lipschitzian Singularities
- Corrigendum to: ``On the complexity of finding first-order critical points in constrained nonlinear optimization''
- Perseus: a simple and optimal high-order method for variational inequalities
- A note about the complexity of minimizing Nesterov's smooth Chebyshev-Rosenbrock function
- On the complexity of an inexact restoration method for constrained optimization
- On the use of iterative methods in cubic regularization for unconstrained optimization
- Policy optimization for \(\mathcal{H}_2\) linear control with \(\mathcal{H}_\infty\) robustness guarantee: implicit regularization and global convergence
- A modified PRP-type conjugate gradient algorithm with complexity analysis and its application to image restoration problems
- Large-scale unconstrained optimization using separable cubic modeling and matrix-free subspace minimization
- Conditional gradient type methods for composite nonlinear and stochastic optimization
- Improved optimization methods for image registration problems
MaRDI item: Q3083310