On the Complexity of Steepest Descent, Newton's and Regularized Newton's Methods for Nonconvex Unconstrained Optimization Problems



DOI: 10.1137/090774100

zbMath: 1211.90225

MaRDI QID: Q3083310

Authors: Coralia Cartis, Nicholas I. M. Gould, Philippe L. Toint

Publication date: 21 March 2011

Published in: SIAM Journal on Optimization

Full work available at URL: https://semanticscholar.org/paper/56f4cdfe9a0fcc185a344307c12ac5ee2bbd5117
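
For orientation, since the record carries no abstract: the paper shows that steepest descent and Newton's method may require a number of iterations arbitrarily close to epsilon^{-2} to drive the gradient norm below epsilon on smooth nonconvex problems, whereas cubically regularized Newton methods attain the better \(\mathcal{O}(\epsilon^{-3/2})\) worst-case bound. The Python sketch below only illustrates the two step types being compared; it is not code from the paper. The helper names (steepest_descent_step, cubic_newton_step), the fixed regularization weight sigma, and the toy objective are assumptions of this sketch, and the cubic-subproblem solver ignores the degenerate "hard case".

# Illustrative sketch only, not the paper's algorithm: a fixed-stepsize
# steepest-descent step and a cubic-regularized Newton step, obtained by
# solving the secular equation ||s(lam)|| = lam/sigma for the model
#   m(s) = f(x) + g's + 0.5 s'Hs + (sigma/3)||s||^3.
import numpy as np

def steepest_descent_step(g, L):
    # Classical 1/L gradient step; worst-case O(eps^-2) iterations to reach ||g|| <= eps.
    return -g / L

def cubic_newton_step(g, H, sigma):
    # A global minimizer of the cubic model satisfies (H + lam*I) s = -g with
    # lam = sigma * ||s|| >= max(0, -lambda_min(H)); we bisect on lam.
    # (The degenerate "hard case" of the subproblem is ignored in this sketch.)
    lam_lo = max(0.0, -np.linalg.eigvalsh(H)[0]) + 1e-12
    phi = lambda lam: np.linalg.norm(
        np.linalg.solve(H + lam * np.eye(len(g)), -g)) - lam / sigma
    lam_hi = lam_lo + 1.0
    while phi(lam_hi) > 0.0:          # expand until the root is bracketed
        lam_hi *= 2.0
    for _ in range(100):              # phi is strictly decreasing, so bisection works
        lam = 0.5 * (lam_lo + lam_hi)
        if phi(lam) > 0.0:
            lam_lo = lam
        else:
            lam_hi = lam
    return np.linalg.solve(H + lam * np.eye(len(g)), -g)

# Tiny usage example on a nonconvex toy function with an indefinite Hessian;
# a practical ARC method would also adapt sigma between iterations.
def f(x):    return 0.5 * (x[0]**2 - x[1]**2) + 0.25 * x[1]**4
def grad(x): return np.array([x[0], -x[1] + x[1]**3])
def hess(x): return np.array([[1.0, 0.0], [0.0, -1.0 + 3.0 * x[1]**2]])

x = np.array([1.0, 0.5])
for _ in range(50):
    x = x + cubic_newton_step(grad(x), hess(x), sigma=1.0)
print(x, np.linalg.norm(grad(x)))     # approaches the local minimizers (0, +-1)

The bisection relies on ||s(lam)|| being decreasing in lam while lam/sigma is increasing, so the secular equation has a unique root above max(0, -lambda_min(H)).
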


Mathematics Subject Classification

68Q25: Analysis of algorithms and problem complexity

65K05: Numerical mathematical programming methods

90C60: Abstract computational complexity for mathematical programming problems

90C30: Nonlinear programming

49M05: Numerical methods based on necessary conditions

49M15: Newton-type methods

49M37: Numerical methods based on nonlinear programming

58C15: Implicit function theorems; global Newton methods on manifolds


Related Items

On High-order Model Regularization for Constrained Optimization
Universal Regularization Methods: Varying the Power, the Smoothness and the Accuracy
Complexity of Partially Separable Convexly Constrained Optimization with Non-Lipschitzian Singularities
On the Quadratic Convergence of the Cubic Regularization Method under a Local Error Bound Condition
ARCq: a new adaptive regularization by cubics
On Regularization and Active-set Methods with Complexity for Constrained Optimization
Complexity Analysis of Second-Order Line-Search Algorithms for Smooth Nonconvex Optimization
Nonlinear stepsize control, trust regions and regularizations for unconstrained optimization
Stochastic analysis of an adaptive cubic regularization method under inexact gradient evaluations and dynamic Hessian accuracy
Finding zeros of Hölder metrically subregular mappings via globally convergent Levenberg–Marquardt methods
On high-order model regularization for multiobjective optimization
On High-Order Multilevel Optimization Strategies
Policy Optimization for $\mathcal{H}_2$ Linear Control with $\mathcal{H}_\infty$ Robustness Guarantee: Implicit Regularization and Global Convergence
Complexity of gradient descent for multiobjective optimization
On the Complexity of an Inexact Restoration Method for Constrained Optimization
Sharp Worst-Case Evaluation Complexity Bounds for Arbitrary-Order Nonconvex Optimization with Inexpensive Constraints
Iteration and evaluation complexity for the minimization of functions whose computation is intrinsically inexact
Worst-case evaluation complexity of non-monotone gradient-related algorithms for unconstrained optimization
On the Evaluation Complexity of Constrained Nonlinear Least-Squares and General Constrained Nonlinear Optimization Using Second-Order Methods
The Use of Quadratic Regularization with a Cubic Descent Condition for Unconstrained Optimization
Convergence guarantees for a class of non-convex and non-smooth optimization problems
A note about the complexity of minimizing Nesterov's smooth Chebyshev–Rosenbrock function
A classification of slow convergence near parametric periodic points of discrete dynamical systems
Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization
Worst case complexity of direct search under convexity
An incremental descent method for multi-objective optimization
Lower bounds for non-convex stochastic optimization
Worst-Case Complexity of TRACE with Inexact Subproblem Solutions for Nonconvex Smooth Optimization
A modified PRP-type conjugate gradient algorithm with complexity analysis and its application to image restoration problems
A nonlinear conjugate gradient method with complexity guarantees and its application to nonconvex regression
Accelerated gradient methods for nonconvex nonlinear and stochastic programming
Global complexity bound analysis of the Levenberg-Marquardt method for nonsmooth equations and its application to the nonlinear complementarity problem
Corrigendum to: "On the complexity of finding first-order critical points in constrained nonlinear optimization"
A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon^{-3/2})\) for nonconvex optimization
Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
On a global complexity bound of the Levenberg-Marquardt method
Complexity bounds for second-order optimality in unconstrained optimization
Updating the regularization parameter in the adaptive cubic regularization algorithm
A regularized Newton method without line search for unconstrained optimization
Worst case complexity of direct search
Inductive manifold learning using structured support vector machine
Co-design of linear systems using generalized Benders decomposition
Global complexity bound of the inexact Levenberg-Marquardt method
Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
Conditional gradient type methods for composite nonlinear and stochastic optimization
Improved optimization methods for image registration problems
Structured nonconvex and nonsmooth optimization: algorithms and iteration complexity analysis
Sub-sampled Newton methods
Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
A Newton-like trust region method for large-scale unconstrained nonconvex minimization
Regional complexity analysis of algorithms for nonconvex smooth optimization
A generalized worst-case complexity analysis for non-monotone line searches
Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives
On large-scale unconstrained optimization and arbitrary regularization
A cubic regularization of Newton's method with finite difference Hessian approximations
An efficient adaptive accelerated inexact proximal point method for solving linearly constrained nonconvex composite problems
On the use of third-order models with fourth-order regularization for unconstrained optimization
Newton-type methods for non-convex optimization under inexact Hessian information
Lower bounds for finding stationary points I
Lower bounds for finding stationary points II: first-order methods
A Newton-CG algorithm with complexity guarantees for smooth unconstrained optimization
Large-scale unconstrained optimization using separable cubic modeling and matrix-free subspace minimization
Generalized uniformly optimal methods for nonlinear programming
Oracle complexity of second-order methods for smooth convex optimization
Convergence and evaluation-complexity analysis of a regularized tensor-Newton method for solving nonlinear least-squares problems
A Newton-like method with mixed factorizations and cubic regularization for unconstrained minimization
On the complexity of finding first-order critical points in constrained nonlinear optimization
On the use of iterative methods in cubic regularization for unconstrained optimization
The exact worst-case convergence rate of the gradient method with fixed step lengths for \(L\)-smooth functions
Evaluation Complexity for Nonlinear Constrained Optimization Using Unscaled KKT Conditions and High-Order Models
On the worst-case complexity of nonlinear stepsize control algorithms for convex unconstrained optimization
Trust-Region Methods Without Using Derivatives: Worst Case Complexity and the NonSmooth Case
A cubic regularization algorithm for unconstrained optimization using line search and nonmonotone techniques
Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization
Complexity of the Newton method for set-valued maps
Fault Detection Based On Online Probability Density Function Estimation