On the Complexity of Steepest Descent, Newton's and Regularized Newton's Methods for Nonconvex Unconstrained Optimization Problems

From MaRDI portal
Publication:3083310

DOI: 10.1137/090774100
zbMath: 1211.90225
OpenAlex: W2044207982
MaRDI QID: Q3083310

No author found.

Publication date: 21 March 2011

Published in: SIAM Journal on Optimization

Full work available at URL: https://semanticscholar.org/paper/56f4cdfe9a0fcc185a344307c12ac5ee2bbd5117

Related Items

An incremental descent method for multi-objective optimization
Stochastic analysis of an adaptive cubic regularization method under inexact gradient evaluations and dynamic Hessian accuracy
Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization
Finding zeros of Hölder metrically subregular mappings via globally convergent Levenberg–Marquardt methods
On high-order model regularization for multiobjective optimization
Convergence guarantees for a class of non-convex and non-smooth optimization problems
A cubic regularization of Newton's method with finite difference Hessian approximations
A note about the complexity of minimizing Nesterov's smooth Chebyshev–Rosenbrock function
Co-design of linear systems using generalized Benders decomposition
Global complexity bound of the inexact Levenberg-Marquardt method
Complexity of the Newton method for set-valued maps
Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
An efficient adaptive accelerated inexact proximal point method for solving linearly constrained nonconvex composite problems
On the use of third-order models with fourth-order regularization for unconstrained optimization
The exact worst-case convergence rate of the gradient method with fixed step lengths for \(L\)-smooth functions
Lower bounds for non-convex stochastic optimization
Worst-Case Complexity of TRACE with Inexact Subproblem Solutions for Nonconvex Smooth Optimization
Convergence and evaluation-complexity analysis of a regularized tensor-Newton method for solving nonlinear least-squares problems
A Newton-like method with mixed factorizations and cubic regularization for unconstrained minimization
On a global complexity bound of the Levenberg-Marquardt method
A modified PRP-type conjugate gradient algorithm with complexity analysis and its application to image restoration problems
A nonlinear conjugate gradient method with complexity guarantees and its application to nonconvex regression
Inductive manifold learning using structured support vector machine
Global complexity bound analysis of the Levenberg-Marquardt method for nonsmooth equations and its application to the nonlinear complementarity problem
Fault Detection Based On Online Probability Density Function Estimation
On High-order Model Regularization for Constrained Optimization
Newton-type methods for non-convex optimization under inexact Hessian information
Lower bounds for finding stationary points I
Conditional gradient type methods for composite nonlinear and stochastic optimization
Improved optimization methods for image registration problems
Lower bounds for finding stationary points II: first-order methods
Complexity bounds for second-order optimality in unconstrained optimization
On the complexity of finding first-order critical points in constrained nonlinear optimization
Universal Regularization Methods: Varying the Power, the Smoothness and the Accuracy
Complexity of Partially Separable Convexly Constrained Optimization with Non-Lipschitzian Singularities
On the Quadratic Convergence of the Cubic Regularization Method under a Local Error Bound Condition
Structured nonconvex and nonsmooth optimization: algorithms and iteration complexity analysis
On High-Order Multilevel Optimization Strategies
Sub-sampled Newton methods
ARCq: a new adaptive regularization by cubics
On Regularization and Active-set Methods with Complexity for Constrained Optimization
Complexity Analysis of Second-Order Line-Search Algorithms for Smooth Nonconvex Optimization
Corrigendum to: "On the complexity of finding first-order critical points in constrained nonlinear optimization"
A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon ^{-3/2})\) for nonconvex optimization
Policy Optimization for $\mathcal{H}_2$ Linear Control with $\mathcal{H}_\infty$ Robustness Guarantee: Implicit Regularization and Global Convergence
Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
Updating the regularization parameter in the adaptive cubic regularization algorithm
Nonlinear stepsize control, trust regions and regularizations for unconstrained optimization
A classification of slow convergence near parametric periodic points of discrete dynamical systems
A Newton-like trust region method for large-scale unconstrained nonconvex minimization
Regional complexity analysis of algorithms for nonconvex smooth optimization
Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
A generalized worst-case complexity analysis for non-monotone line searches
Complexity of gradient descent for multiobjective optimization
Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization
Worst case complexity of direct search under convexity
Evaluation Complexity for Nonlinear Constrained Optimization Using Unscaled KKT Conditions and High-Order Models
On the use of iterative methods in cubic regularization for unconstrained optimization
On the Complexity of an Inexact Restoration Method for Constrained Optimization
A regularized Newton method without line search for unconstrained optimization
A Newton-CG algorithm with complexity guarantees for smooth unconstrained optimization
Worst case complexity of direct search
Large-scale unconstrained optimization using separable cubic modeling and matrix-free subspace minimization
Sharp Worst-Case Evaluation Complexity Bounds for Arbitrary-Order Nonconvex Optimization with Inexpensive Constraints
On the worst-case complexity of nonlinear stepsize control algorithms for convex unconstrained optimization
Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives
Trust-Region Methods Without Using Derivatives: Worst Case Complexity and the NonSmooth Case
A cubic regularization algorithm for unconstrained optimization using line search and nonmonotone techniques
Generalized uniformly optimal methods for nonlinear programming
On large-scale unconstrained optimization and arbitrary regularization
Iteration and evaluation complexity for the minimization of functions whose computation is intrinsically inexact
Oracle complexity of second-order methods for smooth convex optimization
Worst-case evaluation complexity of non-monotone gradient-related algorithms for unconstrained optimization
On the Evaluation Complexity of Constrained Nonlinear Least-Squares and General Constrained Nonlinear Optimization Using Second-Order Methods
The Use of Quadratic Regularization with a Cubic Descent Condition for Unconstrained Optimization
Accelerated gradient methods for nonconvex nonlinear and stochastic programming