The Use of Quadratic Regularization with a Cubic Descent Condition for Unconstrained Optimization
DOI: 10.1137/16M110280X
zbMATH Open: 1370.90260
MaRDI QID: Q5266534
Publication date: 16 June 2017
Published in: SIAM Journal on Optimization
Mathematics Subject Classification:
- Numerical mathematical programming methods (65K05)
- Analysis of algorithms and problem complexity (68Q25)
- Nonlinear programming (90C30)
- Numerical methods based on nonlinear programming (49M37)
- Abstract computational complexity for mathematical programming problems (90C60)
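For orientation only, the sketch below illustrates the technique named in the title: Newton-type steps with quadratic regularization, where a trial step \(s\) is accepted only if it delivers the cubic descent \(f(x+s) \le f(x) - \alpha \|s\|^3\). All parameter names, update factors, and safeguards here are illustrative assumptions, not the authors' algorithm; see the paper itself for the actual method and its worst-case complexity analysis.

```python
import numpy as np

def quad_reg_cubic_descent(f, grad, hess, x0, alpha=1e-4, sigma0=1.0,
                           tol=1e-8, max_iter=200):
    """Minimal sketch (hypothetical parameter names) of Newton steps with
    quadratic regularization, accepted under a cubic descent condition."""
    x, sigma = np.asarray(x0, dtype=float), sigma0
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        H = hess(x)
        while True:
            # Quadratically regularized Newton system: (H + sigma*I) s = -g.
            try:
                s = np.linalg.solve(H + sigma * np.eye(len(x)), -g)
            except np.linalg.LinAlgError:
                sigma *= 10.0
                continue
            # Cubic descent test: f(x+s) <= f(x) - alpha*||s||^3.
            # For smooth f this passes once sigma is large enough.
            if f(x + s) <= f(x) - alpha * np.linalg.norm(s) ** 3:
                x = x + s
                sigma = max(sigma / 10.0, 1e-12)  # relax regularization
                break
            sigma *= 10.0  # strengthen regularization and retry
    return x

if __name__ == "__main__":
    # Illustrative run on the 2-D Rosenbrock function.
    f = lambda x: (1 - x[0])**2 + 100*(x[1] - x[0]**2)**2
    g = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                            200*(x[1] - x[0]**2)])
    H = lambda x: np.array([[2 - 400*(x[1] - 3*x[0]**2), -400*x[0]],
                            [-400*x[0], 200.0]])
    print(quad_reg_cubic_descent(f, g, H, [-1.2, 1.0]))  # approx. [1, 1]
```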
Cites Work
- Title not available
- LAPACK Users' Guide
- Computing a Trust Region Step
- CUTEst: a constrained and unconstrained testing environment with safe threads for mathematical optimization
- Benchmarking optimization software with performance profiles
- An Algorithm for Least-Squares Estimation of Nonlinear Parameters
- A method for the solution of certain non-linear problems in least squares
- Trust Region Methods
- On the Complexity of Steepest Descent, Newton's and Regularized Newton's Methods for Nonconvex Unconstrained Optimization Problems
- A new matrix-free algorithm for the large-scale trust-region subproblem
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Cubic regularization of Newton method and its global performance
- Algorithm 873: LSTRS: MATLAB software for large-scale trust-region subproblems and regularization
- On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization
- Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
- A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon^{-3/2})\) for nonconvex optimization
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
- Algebraic rules for quadratic regularization of Newton's method
- Evaluating bound-constrained minimization software
Cited In (32)
- Augmented Lagrangians with constrained subproblems and convergence to second-order stationary points
- On High-order Model Regularization for Constrained Optimization
- A line-search algorithm inspired by the adaptive cubic regularization framework and complexity analysis
- A Newton-like method with mixed factorizations and cubic regularization for unconstrained minimization
- Universal Regularization Methods: Varying the Power, the Smoothness and the Accuracy
- On Regularization and Active-set Methods with Complexity for Constrained Optimization
- Worst-Case Complexity of TRACE with Inexact Subproblem Solutions for Nonconvex Smooth Optimization
- A cubic regularization of Newton's method with finite difference Hessian approximations
- Regional complexity analysis of algorithms for nonconvex smooth optimization
- A generalized worst-case complexity analysis for non-monotone line searches
- A decoupled first/second-order steps technique for nonconvex nonlinear unconstrained optimization with improved complexity bounds
- Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary
- On the worst-case evaluation complexity of non-monotone line search algorithms
- On the Complexity of an Inexact Restoration Method for Constrained Optimization
- A Newton-CG algorithm with complexity guarantees for smooth unconstrained optimization
- A Newton-CG based barrier-augmented Lagrangian method for general nonconvex conic optimization
- Combined methods for solving degenerate unconstrained optimization problems
- A nonlinear conjugate gradient method with complexity guarantees and its application to nonconvex regression
- Complexity Analysis of Second-Order Line-Search Algorithms for Smooth Nonconvex Optimization
- Iteration and evaluation complexity for the minimization of functions whose computation is intrinsically inexact
- A note on the worst-case complexity of nonlinear stepsize control methods for convex smooth unconstrained optimization
- Convergence and evaluation-complexity analysis of a regularized tensor-Newton method for solving nonlinear least-squares problems
- On large-scale unconstrained optimization and arbitrary regularization
- Adaptive Quadratically Regularized Newton Method for Riemannian Optimization
- A concise second-order complexity analysis for unconstrained optimization using high-order regularized models
- On the use of third-order models with fourth-order regularization for unconstrained optimization
- A Newton-CG Based Barrier Method for Finding a Second-Order Stationary Point of Nonconvex Conic Optimization with Complexity Guarantees
- Regularized Newton Method with Global \(\mathcal{O}(1/k^2)\) Convergence
- Accelerated Regularized Newton Methods for Minimizing Composite Convex Functions
- A Newton-CG Based Augmented Lagrangian Method for Finding a Second-Order Stationary Point of Nonconvex Equality Constrained Optimization with Complexity Guarantees
- On high-order model regularization for multiobjective optimization
- The spherical quadratic steepest descent (SQSD) method for unconstrained minimization with no explicit line searches