Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
global convergence; nonlinear optimization; unconstrained optimization; local convergence; Newton's method; trust-region methods; cubic regularization
Numerical mathematical programming methods (65K05); Nonlinear programming (90C30); Numerical methods based on nonlinear programming (49M37); Iterative numerical methods for linear systems (65F10); Newton-type methods (49M15); Numerical computation of solutions to single equations (65H05); Implicit function theorems; global Newton methods on manifolds (58C15)
Recommendations
- An improvement of adaptive cubic regularization method for unconstrained optimization problems
- On the use of iterative methods in cubic regularization for unconstrained optimization
- An efficient nonmonotone adaptive cubic regularization method with line search for unconstrained optimization problem
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization
Cites Work
- scientific article; zbMATH DE number 3843081
- scientific article; zbMATH DE number 3871040
- scientific article; zbMATH DE number 3928227
- scientific article; zbMATH DE number 2104353
- scientific article; zbMATH DE number 961607
- A Characterization of Superlinear Convergence and Its Application to Quasi-Newton Methods
- A Fast Algorithm for the Multiplication of Generalized Hilbert Matrices with Vectors
- A Modified Equation Approach to Constructing Fourth Order Methods for Acoustic Wave Propagation
- Accelerating the cubic regularization of Newton's method on convex problems
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Affine conjugate adaptive Newton methods for nonlinear elastomechanics
- Analysis of a Symmetric Rank-One Trust Region Method
- Benchmarking optimization software with performance profiles
- CUTEr and SifDec
- Convergence of quasi-Newton matrices generated by the symmetric rank one update
- Cubic regularization of Newton method and its global performance
- Inexact Newton Methods
- Introductory lectures on convex optimization. A basic course
- Numerical Optimization
- Sensitivity of trust-region algorithms to their parameters
- Solving the Trust-Region Subproblem using the Lanczos Method
- The “global” convergence of Broyden-like methods with suitable line search
- Truncated-Newton algorithms for large-scale unconstrained optimization
- Trust Region Methods
Cited In (first 100 items shown)
- Error estimates for iterative algorithms for minimizing regularized quadratic subproblems
- An efficient nonmonotone adaptive cubic regularization method with line search for unconstrained optimization problem
- A Unified Efficient Implementation of Trust-region Type Algorithms for Unconstrained Optimization
- First-Order Methods for Nonconvex Quadratic Minimization
- \(\rho\)-regularization subproblems: strong duality and an eigensolver-based algorithm
- Sample average approximation with sparsity-inducing penalty for high-dimensional stochastic programming
- Certification of real inequalities: templates and sums of squares
- A Newton-based method for nonconvex optimization with fast evasion of saddle points
- Projected adaptive cubic regularization algorithm with derivative-free filter technique for box constrained optimization
- Local convergence of tensor methods
- A cubic regularization of Newton's method with finite difference Hessian approximations
- Gradient descent finds the cubic-regularized nonconvex Newton step
- A note on inexact gradient and Hessian conditions for cubic regularized Newton's method
- Minimizing uniformly convex functions by cubic regularization of Newton method
- On high-order multilevel optimization strategies
- Smoothness parameter of power of Euclidean norm
- Adaptive regularization with cubics on manifolds
- Stochastic analysis of an adaptive cubic regularization method under inexact gradient evaluations and dynamic Hessian accuracy
- A filter sequential adaptive cubic regularization algorithm for nonlinear constrained optimization
- Structured Quasi-Newton Methods for Optimization with Orthogonality Constraints
- On the quadratic convergence of the cubic regularization method under a local error bound condition
- New subspace minimization conjugate gradient methods based on regularization model for unconstrained optimization
- An accelerated first-order method with complexity analysis for solving cubic regularization subproblems
- A Newton-CG algorithm with complexity guarantees for smooth unconstrained optimization
- Several accelerated subspace minimization conjugate gradient methods based on regularization model and convergence rate analysis for nonconvex problems
- Implementable tensor methods in unconstrained convex optimization
- On global minimizers of quadratic functions with cubic regularization
- Combining stochastic adaptive cubic regularization with negative curvature for nonconvex optimization
- Cubic regularization methods with second-order complexity guarantee based on a new subproblem reformulation
- Second-order guarantees of distributed gradient algorithms
- Solving the cubic regularization model by a nested restarting Lanczos method
- Iteration and evaluation complexity for the minimization of functions whose computation is intrinsically inexact
- Riemannian stochastic variance-reduced cubic regularized Newton method for submanifold optimization
- On large-scale unconstrained optimization and arbitrary regularization
- On the use of third-order models with fourth-order regularization for unconstrained optimization
- Zeroth-order nonconvex stochastic optimization: handling constraints, high dimensionality, and saddle points
- A unified adaptive tensor approximation scheme to accelerate composite convex optimization
- On monotonic estimates of the norm of the minimizers of regularized quadratic functions in Krylov spaces
- A note about the complexity of minimizing Nesterov's smooth Chebyshev-Rosenbrock function
- An adaptive high order method for finding third-order critical points of nonconvex optimization
- Convergence of Newton-MR under inexact Hessian information
- A sequential adaptive regularisation using cubics algorithm for solving nonlinear equality constrained optimization
- Majorization-minimization-based Levenberg-Marquardt method for constrained nonlinear least squares
- Adaptive cubic regularization methods with dynamic inexact Hessian information and applications to finite-sum minimization
- An SQP method for equality constrained optimization on Hilbert manifolds
- Finding second-order stationary points in constrained minimization: a feasible direction approach
- On local nonglobal minimum of trust-region subproblem and extension
- On high-order model regularization for multiobjective optimization
- Large-scale unconstrained optimization using separable cubic modeling and matrix-free subspace minimization
- Parameter-free accelerated gradient descent for nonconvex minimization
- Super-Universal Regularized Newton Method
- A Riemannian dimension-reduced second-order method with application in sensor network localization
- Convergence Properties of an Objective-Function-Free Optimization Regularization Algorithm, Including an \(\boldsymbol{\mathcal{O}(\epsilon^{-3/2})}\) Complexity Bound
- Relaxing Kink Qualifications and Proving Convergence Rates in Piecewise Smooth Optimization
- Gradient descent in the absence of global Lipschitz continuity of the gradients
- Inexact tensor methods and their application to stochastic convex optimization
- Two modified adaptive cubic regularization algorithms by using the nonmonotone Armijo-type line search
- Set-limited functions and polynomial-time interior-point methods
- Worst-Case Complexity of TRACE with Inexact Subproblem Solutions for Nonconvex Smooth Optimization
- Scalable adaptive cubic regularization methods
- A new complexity metric for nonconvex rank-one generalized matrix completion
- Correction of nonmonotone trust region algorithm based on a modified diagonal regularized quasi-Newton method
- Sketched Newton-Raphson
- Smoothing quadratic regularization method for hemivariational inequalities
- An adaptive conic cubic overestimation method for unconstrained optimization
- Quadratic regularization methods with finite-difference gradient approximations
- Convergent least-squares optimization methods for variational data assimilation
- A Newton-CG based barrier-augmented Lagrangian method for general nonconvex conic optimization
- Gradient regularization of Newton method with Bregman distances
- A fast and simple modification of Newton's method avoiding saddle points
- Derivative-free separable quadratic modeling and cubic regularization for unconstrained optimization
- The solution of Euclidean norm trust region SQP subproblems via second-order cone programs: an overview and elementary introduction
- Hessian barrier algorithms for non-convex conic optimization
- On convergence of the generalized Lanczos trust-region method for trust-region subproblems
- An adaptive nonmonotone trust region method based on a modified scalar approximation of the Hessian in the successive quadratic subproblems
- Perseus: a simple and optimal high-order method for variational inequalities
- Trust region-type method under inexact gradient and inexact Hessian with convergence analysis
- An adaptive cubic regularization algorithm for computing H- and Z-eigenvalues of real even-order supersymmetric tensors
- A Newton-CG Based Barrier Method for Finding a Second-Order Stationary Point of Nonconvex Conic Optimization with Complexity Guarantees
- Regularized Newton Method with Global \({\boldsymbol{\mathcal{O}(1/{k}^2)}}\) Convergence
- Newton-MR: inexact Newton method with minimum residual sub-problem solver
- A decentralized smoothing quadratic regularization algorithm for composite consensus optimization with non-Lipschitz singularities
- Faster Riemannian Newton-type optimization by subsampling and cubic regularization
- Complexity analysis of a trust funnel algorithm for equality constrained optimization
- Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
- On a multilevel Levenberg-Marquardt method for the training of artificial neural networks and its application to the solution of partial differential equations
- Solving Large-Scale Cubic Regularization by a Generalized Eigenvalue Problem
- On High-order Model Regularization for Constrained Optimization
- Nonlinear stepsize control, trust regions and regularizations for unconstrained optimization
- Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
- A line-search algorithm inspired by the adaptive cubic regularization framework and complexity analysis
- Complexity analysis of second-order line-search algorithms for smooth nonconvex optimization
- Cubic regularization of Newton method and its global performance
- Cubic regularization in symmetric rank-1 quasi-Newton methods
- Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
- Cubic overestimation and secant updating for unconstrained optimization of \(C^{2,1}\) functions
- A Newton-like method with mixed factorizations and cubic regularization for unconstrained minimization
- Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization
- Preconditioning and globalizing conjugate gradients in dual space for quadratically penalized nonlinear-least squares problems
- Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions