The use of quadratic regularization with a cubic descent condition for unconstrained optimization
Publication: 5266534
Recommendations
- Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
- Cubic regularization of Newton method and its global performance
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Complexity bounds for second-order optimality in unconstrained optimization
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
Cites work
- scientific article; zbMATH DE number 852532
- A method for the solution of certain non-linear problems in least squares
- A new matrix-free algorithm for the large-scale trust-region subproblem
- A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon^{-3/2})\) for nonconvex optimization
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Algebraic rules for quadratic regularization of Newton's method
- Algorithm 873: LSTRS: MATLAB software for large-scale trust-region subproblems and regularization
- An Algorithm for Least-Squares Estimation of Nonlinear Parameters
- Benchmarking optimization software with performance profiles
- CUTEst: a constrained and unconstrained testing environment with safe threads for mathematical optimization
- Computing a Trust Region Step
- Cubic regularization of Newton method and its global performance
- Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
- Evaluating bound-constrained minimization software
- LAPACK Users' Guide
- On the complexity of steepest descent, Newton's and regularized Newton's methods for nonconvex unconstrained optimization problems
- On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization
- Trust Region Methods
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
Cited in (36 documents)
- On the quadratic convergence of the cubic regularization method under a local error bound condition
- A line-search algorithm inspired by the adaptive cubic regularization framework and complexity analysis
- On high-order model regularization for multiobjective optimization
- Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary
- A note on the worst-case complexity of nonlinear stepsize control methods for convex smooth unconstrained optimization
- Convergence and evaluation-complexity analysis of a regularized tensor-Newton method for solving nonlinear least-squares problems
- A Newton-CG Based Barrier Method for Finding a Second-Order Stationary Point of Nonconvex Conic Optimization with Complexity Guarantees
- Regularized Newton Method with Global \(\mathcal{O}(1/k^2)\) Convergence
- Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
- On the complexity of an inexact restoration method for constrained optimization
- A generalized worst-case complexity analysis for non-monotone line searches
- A Newton-CG Based Augmented Lagrangian Method for Finding a Second-Order Stationary Point of Nonconvex Equality Constrained Optimization with Complexity Guarantees
- On the worst-case evaluation complexity of non-monotone line search algorithms
- Accelerated regularized Newton methods for minimizing composite convex functions
- A cubic regularization of Newton's method with finite difference Hessian approximations
- A Newton-like method with mixed factorizations and cubic regularization for unconstrained minimization
- Regional complexity analysis of algorithms for nonconvex smooth optimization
- The spherical quadratic steepest descent (SQSD) method for unconstrained minimization with no explicit line searches
- Augmented Lagrangians with constrained subproblems and convergence to second-order stationary points
- A Newton-CG algorithm with complexity guarantees for smooth unconstrained optimization
- A Newton-CG based barrier-augmented Lagrangian method for general nonconvex conic optimization
- Cubic regularization of Newton method and its global performance
- On global minimizers of quadratic functions with cubic regularization
- Combined methods for solving degenerate unconstrained optimization problems
- A nonlinear conjugate gradient method with complexity guarantees and its application to nonconvex regression
- A decoupled first/second-order steps technique for nonconvex nonlinear unconstrained optimization with improved complexity bounds
- A concise second-order complexity analysis for unconstrained optimization using high-order regularized models
- Complexity analysis of second-order line-search algorithms for smooth nonconvex optimization
- Adaptive quadratically regularized Newton method for Riemannian optimization
- On large-scale unconstrained optimization and arbitrary regularization
- Universal Regularization Methods: Varying the Power, the Smoothness and the Accuracy
- On the use of third-order models with fourth-order regularization for unconstrained optimization
- On High-order Model Regularization for Constrained Optimization
- Iteration and evaluation complexity for the minimization of functions whose computation is intrinsically inexact
- On regularization and active-set methods with complexity for constrained optimization
- Worst-Case Complexity of TRACE with Inexact Subproblem Solutions for Nonconvex Smooth Optimization