On the Complexity of Steepest Descent, Newton's and Regularized Newton's Methods for Nonconvex Unconstrained Optimization Problems
DOI: 10.1137/090774100 · zbMATH Open: 1211.90225 · OpenAlex: W2044207982 · MaRDI QID: Q3083310 · FDO: Q3083310
Authors: Coralia Cartis, Nicholas I. M. Gould, Philippe L. Toint
Publication date: 21 March 2011
Published in: SIAM Journal on Optimization
Full work available at URL: https://semanticscholar.org/paper/56f4cdfe9a0fcc185a344307c12ac5ee2bbd5117
Keywords: nonlinear optimization; unconstrained optimization; global rate of convergence; Newton's method; trust-region methods; steepest-descent method; cubic regularization; global complexity bounds
MSC classifications: Numerical mathematical programming methods (65K05) · Analysis of algorithms and problem complexity (68Q25) · Nonlinear programming (90C30) · Numerical methods based on nonlinear programming (49M37) · Abstract computational complexity for mathematical programming problems (90C60) · Newton-type methods (49M15) · Numerical methods based on necessary conditions (49M05) · Implicit function theorems; global Newton methods on manifolds (58C15)
Cited In (83)
- Parameter-free accelerated gradient descent for nonconvex minimization
- A Riemannian dimension-reduced second-order method with application in sensor network localization
- A hybrid inexact regularized Newton and negative curvature method
- Complexity of the Newton method for set-valued maps
- Worst-Case Complexity of TRACE with Inexact Subproblem Solutions for Nonconvex Smooth Optimization
- Finding zeros of Hölder metrically subregular mappings via globally convergent Levenberg–Marquardt methods
- An incremental descent method for multi-objective optimization
- Fault Detection Based On Online Probability Density Function Estimation
- Lower bounds for non-convex stochastic optimization
- Policy Optimization for $\mathcal{H}_2$ Linear Control with $\mathcal{H}_\infty$ Robustness Guarantee: Implicit Regularization and Global Convergence
- No dimension-free deterministic algorithm computes approximate stationarities of Lipschitzians
- A nonlinear conjugate gradient method with complexity guarantees and its application to nonconvex regression
- The exact worst-case convergence rate of the gradient method with fixed step lengths for \(L\)-smooth functions
- Perseus: a simple and optimal high-order method for variational inequalities
- Title not available
- Complexity of gradient descent for multiobjective optimization
- On the Quadratic Convergence of the Cubic Regularization Method under a Local Error Bound Condition
- Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
- A regularized Newton method without line search for unconstrained optimization
- On High-Order Multilevel Optimization Strategies
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- On High-order Model Regularization for Constrained Optimization
- Nonlinear stepsize control, trust regions and regularizations for unconstrained optimization
- ARCq: a new adaptive regularization by cubics
- Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
- Sub-sampled Newton methods
- On a global complexity bound of the Levenberg-Marquardt method
- A classification of slow convergence near parametric periodic points of discrete dynamical systems
- Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization
- A Newton-like method with mixed factorizations and cubic regularization for unconstrained minimization
- Universal Regularization Methods: Varying the Power, the Smoothness and the Accuracy
- Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization
- Oracle complexity of second-order methods for smooth convex optimization
- Worst case complexity of direct search
- Worst case complexity of direct search under convexity
- Structured nonconvex and nonsmooth optimization: algorithms and iteration complexity analysis
- On Regularization and Active-set Methods with Complexity for Constrained Optimization
- A Newton-like trust region method for large-scale unconstrained nonconvex minimization
- On the complexity of finding first-order critical points in constrained nonlinear optimization
- On the worst-case complexity of nonlinear stepsize control algorithms for convex unconstrained optimization
- A cubic regularization of Newton's method with finite difference Hessian approximations
- Regional complexity analysis of algorithms for nonconvex smooth optimization
- Co-design of linear systems using generalized Benders decomposition
- Newton-type methods for non-convex optimization under inexact Hessian information
- A generalized worst-case complexity analysis for non-monotone line searches
- A cubic regularization algorithm for unconstrained optimization using line search and nonmonotone techniques
- Global complexity bound of the inexact Levenberg-Marquardt method
- Lower bounds for finding stationary points I
- Stochastic analysis of an adaptive cubic regularization method under inexact gradient evaluations and dynamic Hessian accuracy
- Trust-region methods without using derivatives: worst case complexity and the nonsmooth case
- Lower bounds for finding stationary points II: first-order methods
- On the Complexity of an Inexact Restoration Method for Constrained Optimization
- A note about the complexity of minimizing Nesterov's smooth Chebyshev–Rosenbrock function
- A Newton-CG algorithm with complexity guarantees for smooth unconstrained optimization
- Updating the regularization parameter in the adaptive cubic regularization algorithm
- Inductive manifold learning using structured support vector machine
- Global complexity bound analysis of the Levenberg-Marquardt method for nonsmooth equations and its application to the nonlinear complementarity problem
- Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives
- Complexity Analysis of Second-Order Line-Search Algorithms for Smooth Nonconvex Optimization
- Evaluation complexity for nonlinear constrained optimization using unscaled KKT conditions and high-order models
- Iteration and evaluation complexity for the minimization of functions whose computation is intrinsically inexact
- An efficient adaptive accelerated inexact proximal point method for solving linearly constrained nonconvex composite problems
- Convergence and evaluation-complexity analysis of a regularized tensor-Newton method for solving nonlinear least-squares problems
- On large-scale unconstrained optimization and arbitrary regularization
- Worst-case evaluation complexity of non-monotone gradient-related algorithms for unconstrained optimization
- Convergence guarantees for a class of non-convex and non-smooth optimization problems
- On the use of third-order models with fourth-order regularization for unconstrained optimization
- On the Evaluation Complexity of Constrained Nonlinear Least-Squares and General Constrained Nonlinear Optimization Using Second-Order Methods
- Generalized uniformly optimal methods for nonlinear programming
- A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon^{-3/2})\) for nonconvex optimization
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
- Complexity bounds for second-order optimality in unconstrained optimization
- Complexity of Partially Separable Convexly Constrained Optimization with Non-Lipschitzian Singularities
- Corrigendum to: "On the complexity of finding first-order critical points in constrained nonlinear optimization"
- On the use of iterative methods in cubic regularization for unconstrained optimization
- A modified PRP-type conjugate gradient algorithm with complexity analysis and its application to image restoration problems
- Sharp Worst-Case Evaluation Complexity Bounds for Arbitrary-Order Nonconvex Optimization with Inexpensive Constraints
- The Use of Quadratic Regularization with a Cubic Descent Condition for Unconstrained Optimization
- On high-order model regularization for multiobjective optimization
- Large-scale unconstrained optimization using separable cubic modeling and matrix-free subspace minimization
- Conditional gradient type methods for composite nonlinear and stochastic optimization
- Improved optimization methods for image registration problems