Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
global convergence; nonlinear optimization; unconstrained optimization; local convergence; Newton's method; trust-region methods; cubic regularization
Numerical mathematical programming methods (65K05) Nonlinear programming (90C30) Numerical methods based on nonlinear programming (49M37) Iterative numerical methods for linear systems (65F10) Newton-type methods (49M15) Numerical computation of solutions to single equations (65H05) Implicit function theorems; global Newton methods on manifolds (58C15)
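The keywords above name the adaptive cubic regularisation (ARC) idea: minimise a cubic model of the objective and adapt the regularisation weight from the ratio of actual to predicted decrease. The following is a minimal one-dimensional sketch of that idea for illustration only, not the paper's actual algorithm (which handles the n-dimensional subproblem with approximate, e.g. Lanczos-based, solves); all function and parameter names here are invented for the example.

```python
import math

def arc_1d(f, g, h, x0, sigma0=1.0, eta=0.1, tol=1e-8, max_iter=100):
    """Minimal 1-D sketch of adaptive cubic regularisation (ARC).

    At each iterate the cubic model
        m(s) = f(x) + g(x)*s + 0.5*h(x)*s**2 + (sigma/3)*|s|**3
    is minimised exactly (closed form in 1-D), and sigma is adapted
    from the ratio of actual to predicted decrease, as in
    trust-region methods.
    """
    x, sigma = x0, sigma0
    for _ in range(max_iter):
        gx, hx = g(x), h(x)
        if abs(gx) < tol:
            break
        # Global minimiser of the 1-D cubic model: s = -sign(g)*t with
        # sigma*t**2 + h*t - |g| = 0, t >= 0 (well defined even if h <= 0).
        t = (-hx + math.sqrt(hx * hx + 4.0 * sigma * abs(gx))) / (2.0 * sigma)
        s = -math.copysign(t, gx)
        model_decrease = -(gx * s + 0.5 * hx * s * s + sigma / 3.0 * abs(s) ** 3)
        rho = (f(x) - f(x + s)) / model_decrease
        if rho >= eta:        # successful step: accept and relax sigma
            x += s
            sigma = max(0.5 * sigma, 1e-8)
        else:                 # unsuccessful: reject and regularise harder
            sigma *= 2.0
    return x
```

Because the cubic term keeps the model bounded below even when the second derivative is negative, the step is well defined without a trust-region constraint; on a nonconvex function such as f(x) = x^4 - 8x^2 + x the iteration settles at a local minimiser with positive curvature.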
- An improvement of adaptive cubic regularization method for unconstrained optimization problems
- On the use of iterative methods in cubic regularization for unconstrained optimization
- An efficient nonmonotone adaptive cubic regularization method with line search for unconstrained optimization problem
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization
- scientific article; zbMATH DE number 3843081
- scientific article; zbMATH DE number 3871040
- scientific article; zbMATH DE number 3928227
- scientific article; zbMATH DE number 2104353
- scientific article; zbMATH DE number 961607
- A Characterization of Superlinear Convergence and Its Application to Quasi-Newton Methods
- A Fast Algorithm for the Multiplication of Generalized Hilbert Matrices with Vectors
- A Modified Equation Approach to Constructing Fourth Order Methods for Acoustic Wave Propagation
- Accelerating the cubic regularization of Newton's method on convex problems
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Affine conjugate adaptive Newton methods for nonlinear elastomechanics
- Analysis of a Symmetric Rank-One Trust Region Method
- Benchmarking optimization software with performance profiles.
- CUTEr and SifDec
- Convergence of quasi-Newton matrices generated by the symmetric rank one update
- Cubic regularization of Newton method and its global performance
- Inexact Newton Methods
- Introductory lectures on convex optimization. A basic course.
- Numerical Optimization
- Sensitivity of trust-region algorithms to their parameters
- Solving the Trust-Region Subproblem using the Lanczos Method
- The “global” convergence of Broyden-like methods with suitable line search
- Truncated-Newton algorithms for large-scale unconstrained optimization
- Trust Region Methods
- Second-order guarantees of distributed gradient algorithms
- Convergence and evaluation-complexity analysis of a regularized tensor-Newton method for solving nonlinear least-squares problems
- Solving the cubic regularization model by a nested restarting Lanczos method
- Iteration and evaluation complexity for the minimization of functions whose computation is intrinsically inexact
- On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization
- Riemannian stochastic variance-reduced cubic regularized Newton method for submanifold optimization
- On large-scale unconstrained optimization and arbitrary regularization
- On regularization and active-set methods with complexity for constrained optimization
- On the use of third-order models with fourth-order regularization for unconstrained optimization
- The solution of Euclidean norm trust region SQP subproblems via second-order cone programs: an overview and elementary introduction
- A concise second-order complexity analysis for unconstrained optimization using high-order regularized models
- Recent advances in trust region algorithms
- Nonlinear stepsize control algorithms: complexity bounds for first- and second-order optimality
- Zeroth-order nonconvex stochastic optimization: handling constraints, high dimensionality, and saddle points
- A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon ^{-3/2})\) for nonconvex optimization
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
- On the use of the energy norm in trust-region and adaptive cubic regularization subproblems
- Complexity analysis of interior point algorithms for non-Lipschitz and nonconvex minimization
- Hessian barrier algorithms for non-convex conic optimization
- On convergence of the generalized Lanczos trust-region method for trust-region subproblems
- Complexity bounds for second-order optimality in unconstrained optimization
- On monotonic estimates of the norm of the minimizers of regularized quadratic functions in Krylov spaces
- A unified adaptive tensor approximation scheme to accelerate composite convex optimization
- An adaptive nonmonotone trust region method based on a modified scalar approximation of the Hessian in the successive quadratic subproblems
- An adaptive high order method for finding third-order critical points of nonconvex optimization
- Recent Theoretical Advances in Non-Convex Optimization
- Perseus: a simple and optimal high-order method for variational inequalities
- Trust region-type method under inexact gradient and inexact Hessian with convergence analysis
- A note about the complexity of minimizing Nesterov's smooth Chebyshev-Rosenbrock function
- Regularized Newton methods for minimizing functions with Hölder continuous Hessians
- Convergence of Newton-MR under inexact Hessian information
- A sequential adaptive regularisation using cubics algorithm for solving nonlinear equality constrained optimization
- Majorization-minimization-based Levenberg-Marquardt method for constrained nonlinear least squares
- On the complexity of an inexact restoration method for constrained optimization
- An adaptive cubic regularization algorithm for computing H- and Z-eigenvalues of real even-order supersymmetric tensors
- A Newton-CG Based Barrier Method for Finding a Second-Order Stationary Point of Nonconvex Conic Optimization with Complexity Guarantees
- Regularized Newton Method with Global \({\boldsymbol{\mathcal{O}(1/{k}^2)}}\) Convergence
- Adaptive cubic regularization methods with dynamic inexact Hessian information and applications to finite-sum minimization
- On the use of iterative methods in cubic regularization for unconstrained optimization
- Newton-MR: inexact Newton method with minimum residual sub-problem solver
- On the complexity of solving feasibility problems with regularized models
- Stochastic variance-reduced cubic regularization methods
- \texttt{trlib}: a vector-free implementation of the GLTR method for iterative solution of the trust region problem
- Finding second-order stationary points in constrained minimization: a feasible direction approach
- An SQP method for equality constrained optimization on Hilbert manifolds
- A trust region method for finding second-order stationarity in linearly constrained nonconvex optimization
- On local nonglobal minimum of trust-region subproblem and extension
- Large-scale unconstrained optimization using separable cubic modeling and matrix-free subspace minimization
- An inexact proximal regularization method for unconstrained optimization
- On high-order model regularization for multiobjective optimization
- Complexity analysis of a trust funnel algorithm for equality constrained optimization
- A decentralized smoothing quadratic regularization algorithm for composite consensus optimization with non-Lipschitz singularities
- Faster Riemannian Newton-type optimization by subsampling and cubic regularization
- Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
- On a multilevel Levenberg-Marquardt method for the training of artificial neural networks and its application to the solution of partial differential equations
- Solving Large-Scale Cubic Regularization by a Generalized Eigenvalue Problem
- On High-order Model Regularization for Constrained Optimization
- An efficient nonmonotone adaptive cubic regularization method with line search for unconstrained optimization problem
- Super-Universal Regularized Newton Method
- Parameter-free accelerated gradient descent for nonconvex minimization
- Nonlinear stepsize control, trust regions and regularizations for unconstrained optimization
- Error estimates for iterative algorithms for minimizing regularized quadratic subproblems
- A Riemannian dimension-reduced second-order method with application in sensor network localization
- Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
- A line-search algorithm inspired by the adaptive cubic regularization framework and complexity analysis
- Convergence Properties of an Objective-Function-Free Optimization Regularization Algorithm, Including an \(\boldsymbol{\mathcal{O}(\epsilon^{-3/2})}\) Complexity Bound
- Complexity analysis of second-order line-search algorithms for smooth nonconvex optimization
- Relaxing Kink Qualifications and Proving Convergence Rates in Piecewise Smooth Optimization
- A Unified Efficient Implementation of Trust-region Type Algorithms for Unconstrained Optimization
- Cubic regularization of Newton method and its global performance
- Cubic regularization in symmetric rank-1 quasi-Newton methods
- Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
- \(\rho\)-regularization subproblems: strong duality and an eigensolver-based algorithm
- First-Order Methods for Nonconvex Quadratic Minimization
- Cubic overestimation and secant updating for unconstrained optimization of \(C^{2,1}\) functions
- Sample average approximation with sparsity-inducing penalty for high-dimensional stochastic programming
- A Newton-like method with mixed factorizations and cubic regularization for unconstrained minimization
- Preconditioning and globalizing conjugate gradients in dual space for quadratically penalized nonlinear-least squares problems
- Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization
- Gradient descent in the absence of global Lipschitz continuity of the gradients
- Inexact tensor methods and their application to stochastic convex optimization
- Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions
- Two modified adaptive cubic regularization algorithms by using the nonmonotone Armijo-type line search
- A Newton-like trust region method for large-scale unconstrained nonconvex minimization
- Certification of real inequalities: templates and sums of squares
- A new regularized quasi-Newton algorithm for unconstrained optimization
- On the complexity of finding first-order critical points in constrained nonlinear optimization
- Accelerated methods for nonconvex optimization
- The use of quadratic regularization with a cubic descent condition for unconstrained optimization
- A Newton-based method for nonconvex optimization with fast evasion of saddle points
- On the worst-case complexity of nonlinear stepsize control algorithms for convex unconstrained optimization
- Local convergence of tensor methods
- Projected adaptive cubic regularization algorithm with derivative-free filter technique for box constrained optimization
- A cubic regularization of Newton's method with finite difference Hessian approximations
- Regional complexity analysis of algorithms for nonconvex smooth optimization
- Set-limited functions and polynomial-time interior-point methods
- An adaptive cubic regularization algorithm for nonconvex optimization with convex constraints and its function-evaluation complexity
- Worst-Case Complexity of TRACE with Inexact Subproblem Solutions for Nonconvex Smooth Optimization
- On complexity and convergence of high-order coordinate descent algorithms for smooth nonconvex box-constrained minimization
- Scalable adaptive cubic regularization methods
MaRDI item Q535013