Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
Keywords: global convergence; nonlinear optimization; unconstrained optimization; local convergence; Newton's method; trust-region methods; cubic regularization
MSC classifications: Numerical mathematical programming methods (65K05); Nonlinear programming (90C30); Numerical methods based on nonlinear programming (49M37); Iterative numerical methods for linear systems (65F10); Newton-type methods (49M15); Numerical computation of solutions to single equations (65H05); Implicit function theorems; global Newton methods on manifolds (58C15)
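For context, the adaptive cubic regularisation (ARC) framework studied in this paper replaces the trust-region subproblem by the (possibly approximate) minimisation of a cubically regularised model; the sketch below follows the standard presentation in the ARC literature and is not quoted from this page:

\[
m_k(s) \;=\; f(x_k) \;+\; g_k^{T} s \;+\; \tfrac{1}{2}\, s^{T} B_k\, s \;+\; \tfrac{\sigma_k}{3}\, \|s\|^{3},
\]

where \(g_k = \nabla f(x_k)\), \(B_k\) is a (possibly approximate) Hessian, and the regularisation weight \(\sigma_k > 0\) is updated adaptively from iteration to iteration, playing a role analogous to the reciprocal of a trust-region radius.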
- An improvement of adaptive cubic regularization method for unconstrained optimization problems
- On the use of iterative methods in cubic regularization for unconstrained optimization
- An efficient nonmonotone adaptive cubic regularization method with line search for unconstrained optimization problem
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization
- scientific article; zbMATH DE number 3843081
- scientific article; zbMATH DE number 3871040
- scientific article; zbMATH DE number 3928227
- scientific article; zbMATH DE number 2104353
- scientific article; zbMATH DE number 961607
- A Characterization of Superlinear Convergence and Its Application to Quasi-Newton Methods
- A Fast Algorithm for the Multiplication of Generalized Hilbert Matrices with Vectors
- A Modified Equation Approach to Constructing Fourth Order Methods for Acoustic Wave Propagation
- Accelerating the cubic regularization of Newton's method on convex problems
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Affine conjugate adaptive Newton methods for nonlinear elastomechanics
- Analysis of a Symmetric Rank-One Trust Region Method
- Benchmarking optimization software with performance profiles
- CUTEr and SifDec
- Convergence of quasi-Newton matrices generated by the symmetric rank one update
- Cubic regularization of Newton method and its global performance
- Inexact Newton Methods
- Introductory lectures on convex optimization: a basic course
- Numerical Optimization
- Sensitivity of trust-region algorithms to their parameters
- Solving the Trust-Region Subproblem using the Lanczos Method
- The “global” convergence of Broyden-like methods with suitable line search
- Truncated-Newton algorithms for large-scale unconstrained optimization
- Trust Region Methods
- On local nonglobal minimum of trust-region subproblem and extension
- On the quadratic convergence of the cubic regularization method under a local error bound condition
- Convergence of Newton-MR under inexact Hessian information
- First-Order Methods for Nonconvex Quadratic Minimization
- On high-order model regularization for multiobjective optimization
- Certification of real inequalities: templates and sums of squares
- A filter sequential adaptive cubic regularization algorithm for nonlinear constrained optimization
- Error estimates for iterative algorithms for minimizing regularized quadratic subproblems
- New subspace minimization conjugate gradient methods based on regularization model for unconstrained optimization
- An accelerated first-order method with complexity analysis for solving cubic regularization subproblems
- Large-scale unconstrained optimization using separable cubic modeling and matrix-free subspace minimization
- \(\rho\)-regularization subproblems: strong duality and an eigensolver-based algorithm
- Combining stochastic adaptive cubic regularization with negative curvature for nonconvex optimization
- A unified adaptive tensor approximation scheme to accelerate composite convex optimization
- A note about the complexity of minimizing Nesterov's smooth Chebyshev-Rosenbrock function
- Finding second-order stationary points in constrained minimization: a feasible direction approach
- Adaptive regularization with cubics on manifolds
- Sample average approximation with sparsity-inducing penalty for high-dimensional stochastic programming
- Minimizing uniformly convex functions by cubic regularization of Newton method
- Projected adaptive cubic regularization algorithm with derivative-free filter technique for box constrained optimization
- An efficient nonmonotone adaptive cubic regularization method with line search for unconstrained optimization problem
- An adaptive high order method for finding third-order critical points of nonconvex optimization
- Local convergence of tensor methods
- Zeroth-order nonconvex stochastic optimization: handling constraints, high dimensionality, and saddle points
- A cubic regularization of Newton's method with finite difference Hessian approximations
- On high-order multilevel optimization strategies
- Stochastic analysis of an adaptive cubic regularization method under inexact gradient evaluations and dynamic Hessian accuracy
- Gradient descent finds the cubic-regularized nonconvex Newton step
- A Newton-CG algorithm with complexity guarantees for smooth unconstrained optimization
- A Newton-based method for nonconvex optimization with fast evasion of saddle points
- Several accelerated subspace minimization conjugate gradient methods based on regularization model and convergence rate analysis for nonconvex problems
- On global minimizers of quadratic functions with cubic regularization
- Second-order guarantees of distributed gradient algorithms
- Riemannian stochastic variance-reduced cubic regularized Newton method for submanifold optimization
- Smoothness parameter of power of Euclidean norm
- A Unified Efficient Implementation of Trust-region Type Algorithms for Unconstrained Optimization
- On monotonic estimates of the norm of the minimizers of regularized quadratic functions in Krylov spaces
- A sequential adaptive regularisation using cubics algorithm for solving nonlinear equality constrained optimization
- Majorization-minimization-based Levenberg-Marquardt method for constrained nonlinear least squares
- Adaptive cubic regularization methods with dynamic inexact Hessian information and applications to finite-sum minimization
- Implementable tensor methods in unconstrained convex optimization
- A note on inexact gradient and Hessian conditions for cubic regularized Newton's method
- On large-scale unconstrained optimization and arbitrary regularization
- On the use of third-order models with fourth-order regularization for unconstrained optimization
- An SQP method for equality constrained optimization on Hilbert manifolds
- Solving the cubic regularization model by a nested restarting Lanczos method
- Cubic regularization methods with second-order complexity guarantee based on a new subproblem reformulation
- Structured Quasi-Newton Methods for Optimization with Orthogonality Constraints
- Iteration and evaluation complexity for the minimization of functions whose computation is intrinsically inexact
- Correction of nonmonotone trust region algorithm based on a modified diagonal regularized quasi-Newton method
- Newton-MR: inexact Newton method with minimum residual sub-problem solver
- Two modified adaptive cubic regularization algorithms by using the nonmonotone Armijo-type line search
- Scalable adaptive cubic regularization methods
- An adaptive cubic regularization algorithm for computing H- and Z-eigenvalues of real even-order supersymmetric tensors
- A Newton-CG Based Barrier Method for Finding a Second-Order Stationary Point of Nonconvex Conic Optimization with Complexity Guarantees
- Regularized Newton Method with Global \(\mathcal{O}(1/k^2)\) Convergence
- Quadratic regularization methods with finite-difference gradient approximations
- Sketched Newton-Raphson
- Derivative-free separable quadratic modeling and cubic regularization for unconstrained optimization
- Gradient regularization of Newton method with Bregman distances
- Gradient descent in the absence of global Lipschitz continuity of the gradients
- Inexact tensor methods and their application to stochastic convex optimization
- A new complexity metric for nonconvex rank-one generalized matrix completion
- A fast and simple modification of Newton's method avoiding saddle points
- Hessian barrier algorithms for non-convex conic optimization
- On convergence of the generalized Lanczos trust-region method for trust-region subproblems
- A Newton-CG based barrier-augmented Lagrangian method for general nonconvex conic optimization
- The solution of Euclidean norm trust region SQP subproblems via second-order cone programs: an overview and elementary introduction
- Set-limited functions and polynomial-time interior-point methods
- Parameter-free accelerated gradient descent for nonconvex minimization
- Super-Universal Regularized Newton Method
- An adaptive nonmonotone trust region method based on a modified scalar approximation of the Hessian in the successive quadratic subproblems
- A decentralized smoothing quadratic regularization algorithm for composite consensus optimization with non-Lipschitz singularities
- Relaxing Kink Qualifications and Proving Convergence Rates in Piecewise Smooth Optimization
- Faster Riemannian Newton-type optimization by subsampling and cubic regularization
- Convergent least-squares optimization methods for variational data assimilation
- Complexity analysis of a trust funnel algorithm for equality constrained optimization
- Smoothing quadratic regularization method for hemivariational inequalities
- An adaptive conic cubic overestimation method for unconstrained optimization
- Convergence Properties of an Objective-Function-Free Optimization Regularization Algorithm, Including an \(\mathcal{O}(\epsilon^{-3/2})\) Complexity Bound
- A Riemannian dimension-reduced second-order method with application in sensor network localization
- Worst-Case Complexity of TRACE with Inexact Subproblem Solutions for Nonconvex Smooth Optimization
- Perseus: a simple and optimal high-order method for variational inequalities
- Trust region-type method under inexact gradient and inexact Hessian with convergence analysis
- Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
- On complexity and convergence of high-order coordinate descent algorithms for smooth nonconvex box-constrained minimization
- Newton-type methods for non-convex optimization under inexact Hessian information
- An interior affine scaling cubic regularization algorithm for derivative-free optimization subject to bound constraints
- An adaptive cubic regularization algorithm for nonconvex optimization with convex constraints and its function-evaluation complexity
- Nonlinear stepsize control, trust regions and regularizations for unconstrained optimization
- Complexity analysis of interior point algorithms for non-Lipschitz and nonconvex minimization
- On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization
- A line-search algorithm inspired by the adaptive cubic regularization framework and complexity analysis
- Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
- A cubic regularization algorithm for unconstrained optimization using line search and nonmonotone techniques
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Algebraic rules for quadratic regularization of Newton's method
- A trust region method for finding second-order stationarity in linearly constrained nonconvex optimization
- On the use of iterative methods in cubic regularization for unconstrained optimization
MaRDI item Q535013