Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results

From MaRDI portal
Publication: 535013

DOI: 10.1007/s10107-009-0286-5 · zbMath: 1229.90192 · OpenAlex: W2156005216 · Wikidata: Q58185756 · Scholia: Q58185756 · MaRDI QID: Q535013

Nicholas I. M. Gould, Coralia Cartis, Philippe L. Toint

Publication date: 11 May 2011

Published in: Mathematical Programming. Series A. Series B

Full work available at URL: https://doi.org/10.1007/s10107-009-0286-5
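The paper itself gives the precise adaptive cubic regularisation (ARC) algorithm, with carefully stated subproblem termination rules and update conditions. As a rough illustration only — not the authors' exact method — the basic loop minimises a cubic model m(s) = f(x) + gᵀs + ½sᵀHs + (σ/3)‖s‖³ at each iteration and adapts the regularisation weight σ depending on how well the model predicted the actual decrease. The function name `arc_minimize`, the parameter values, and the σ-update factors below are illustrative assumptions; the subproblem is solved only approximately with a generic optimiser.

```python
import numpy as np
from scipy.optimize import minimize


def arc_minimize(f, grad, hess, x0, sigma0=1.0, eta=0.1, tol=1e-6, max_iter=200):
    """Illustrative sketch of an adaptive cubic regularisation loop.

    Parameter names and update factors are assumptions, not the
    paper's exact rules.
    """
    x, sigma = np.asarray(x0, dtype=float), sigma0
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) < tol:
            break
        # Cubic model m(s) = f(x) + g's + 0.5 s'Hs + (sigma/3)||s||^3
        m = lambda s: f(x) + g @ s + 0.5 * s @ H @ s \
            + (sigma / 3.0) * np.linalg.norm(s) ** 3
        # Approximate subproblem solve from a scaled steepest-descent start.
        s = minimize(m, -g / (np.linalg.norm(H) + sigma)).x
        pred = f(x) - m(s)                       # predicted decrease
        rho = (f(x) - f(x + s)) / pred if pred > 0 else -1.0
        if rho >= eta:
            x, sigma = x + s, max(0.5 * sigma, 1e-8)  # successful: relax sigma
        else:
            sigma *= 2.0                              # unsuccessful: regularise more
    return x
```

On a simple strictly convex quadratic with analytic gradient and Hessian, this sketch converges to the minimiser in a handful of iterations; the paper's Part II establishes the worst-case evaluation-complexity bounds for the properly specified algorithm.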



Related Items

An improvement of adaptive cubic regularization method for unconstrained optimization problems
Stochastic analysis of an adaptive cubic regularization method under inexact gradient evaluations and dynamic Hessian accuracy
Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization
On high-order model regularization for multiobjective optimization
On a multilevel Levenberg–Marquardt method for the training of artificial neural networks and its application to the solution of partial differential equations
On the complexity of solving feasibility problems with regularized models
Local convergence of tensor methods
A cubic regularization of Newton's method with finite difference Hessian approximations
A smoothing SQP framework for a class of composite L_q minimization over polyhedron
A new regularized quasi-Newton algorithm for unconstrained optimization
Adaptive Quadratically Regularized Newton Method for Riemannian Optimization
Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
A branch and bound algorithm for the global optimization of Hessian Lipschitz continuous functions
Cubic overestimation and secant updating for unconstrained optimization of C^{2,1} functions
Accelerated Methods for NonConvex Optimization
Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
Smoothness parameter of power of Euclidean norm
Sketched Newton–Raphson
On the use of third-order models with fourth-order regularization for unconstrained optimization
Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions
Cubic regularization methods with second-order complexity guarantee based on a new subproblem reformulation
Convergence and evaluation-complexity analysis of a regularized tensor-Newton method for solving nonlinear least-squares problems
A Newton-like method with mixed factorizations and cubic regularization for unconstrained minimization
Riemannian stochastic variance-reduced cubic regularized Newton method for submanifold optimization
On monotonic estimates of the norm of the minimizers of regularized quadratic functions in Krylov spaces
Preconditioning and globalizing conjugate gradients in dual space for quadratically penalized nonlinear-least squares problems
Finding second-order stationary points in constrained minimization: a feasible direction approach
On the use of the energy norm in trust-region and adaptive cubic regularization subproblems
First-Order Methods for Nonconvex Quadratic Minimization
On High-order Model Regularization for Constrained Optimization
A trust region algorithm with adaptive cubic regularization methods for nonsmooth convex minimization
Zeroth-order nonconvex stochastic optimization: handling constraints, high dimensionality, and saddle points
Majorization-minimization-based Levenberg-Marquardt method for constrained nonlinear least squares
A sequential adaptive regularisation using cubics algorithm for solving nonlinear equality constrained optimization
Newton-type methods for non-convex optimization under inexact Hessian information
A Trust Region Method for Finding Second-Order Stationarity in Linearly Constrained Nonconvex Optimization
On proximal gradient method for the convex problems regularized with the group reproducing kernel norm
Interior-point methods for nonconvex nonlinear programming: cubic regularization
Relaxing Kink Qualifications and Proving Convergence Rates in Piecewise Smooth Optimization
A Newton-Based Method for Nonconvex Optimization with Fast Evasion of Saddle Points
A Unified Adaptive Tensor Approximation Scheme to Accelerate Composite Convex Optimization
Second-Order Guarantees of Distributed Gradient Algorithms
Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
Complexity bounds for second-order optimality in unconstrained optimization
On the complexity of finding first-order critical points in constrained nonlinear optimization
A line-search algorithm inspired by the adaptive cubic regularization framework and complexity analysis
Implementable tensor methods in unconstrained convex optimization
On the Quadratic Convergence of the Cubic Regularization Method under a Local Error Bound Condition
On High-Order Multilevel Optimization Strategies
Solving Large-Scale Cubic Regularization by a Generalized Eigenvalue Problem
Convergence of Newton-MR under Inexact Hessian Information
trlib: a vector-free implementation of the GLTR method for iterative solution of the trust region problem
On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization
The solution of Euclidean norm trust region SQP subproblems via second-order cone programs: an overview and elementary introduction
ARCq: a new adaptive regularization by cubics
Smoothing quadratic regularization method for hemivariational inequalities
Cubic regularization in symmetric rank-1 quasi-Newton methods
On Regularization and Active-set Methods with Complexity for Constrained Optimization
Complexity Analysis of Second-Order Line-Search Algorithms for Smooth Nonconvex Optimization
Complexity Analysis of a Trust Funnel Algorithm for Equality Constrained Optimization
Nonlinear stepsize control algorithms: complexity bounds for first- and second-order optimality
Projected adaptive cubic regularization algorithm with derivative-free filter technique for box constrained optimization
A trust region algorithm with a worst-case iteration complexity of O(ε^{-3/2}) for nonconvex optimization
An inexact proximal regularization method for unconstrained optimization
Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
Updating the regularization parameter in the adaptive cubic regularization algorithm
Nonlinear stepsize control, trust regions and regularizations for unconstrained optimization
On solving trust-region and other regularised subproblems in optimization
A Newton-like trust region method for large-scale unconstrained nonconvex minimization
Regional complexity analysis of algorithms for nonconvex smooth optimization
An efficient nonmonotone adaptive cubic regularization method with line search for unconstrained optimization problem
A new augmented Lagrangian method for equality constrained optimization with simple unconstrained subproblem
Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
Minimizing uniformly convex functions by cubic regularization of Newton method
Complexity analysis of interior point algorithms for non-Lipschitz and nonconvex minimization
On the use of iterative methods in cubic regularization for unconstrained optimization
Adaptive regularization with cubics on manifolds
New subspace minimization conjugate gradient methods based on regularization model for unconstrained optimization
A note on inexact gradient and Hessian conditions for cubic regularized Newton's method
An accelerated first-order method with complexity analysis for solving cubic regularization subproblems
A Newton-CG algorithm with complexity guarantees for smooth unconstrained optimization
Large-scale unconstrained optimization using separable cubic modeling and matrix-free subspace minimization
Separable cubic modeling and a trust-region strategy for unconstrained minimization with impact in global optimization
On the worst-case complexity of nonlinear stepsize control algorithms for convex unconstrained optimization
Combining stochastic adaptive cubic regularization with negative curvature for nonconvex optimization
A cubic regularization algorithm for unconstrained optimization using line search and nonmonotone techniques
On large-scale unconstrained optimization and arbitrary regularization
An adaptive high order method for finding third-order critical points of nonconvex optimization
On global minimizers of quadratic functions with cubic regularization
Sample average approximation with sparsity-inducing penalty for high-dimensional stochastic programming
On complexity and convergence of high-order coordinate descent algorithms for smooth nonconvex box-constrained minimization
On local nonglobal minimum of trust-region subproblem and extension
Algebraic rules for quadratic regularization of Newton's method
Several accelerated subspace minimization conjugate gradient methods based on regularization model and convergence rate analysis for nonconvex problems
An inexact ℓ1 penalty SQP algorithm for PDE-constrained optimization with an application to shape optimization in linear elasticity
Recent advances in trust region algorithms
Certification of real inequalities: templates and sums of squares
ρ-regularization subproblems: strong duality and an eigensolver-based algorithm
An interior affine scaling cubic regularization algorithm for derivative-free optimization subject to bound constraints
A Unified Efficient Implementation of Trust-region Type Algorithms for Unconstrained Optimization
A note about the complexity of minimizing Nesterov's smooth Chebyshev–Rosenbrock function
Worst-Case Complexity of TRACE with Inexact Subproblem Solutions for Nonconvex Smooth Optimization
Two modified adaptive cubic regularization algorithms by using the nonmonotone Armijo-type line search
A fast and simple modification of Newton's method avoiding saddle points
A filter sequential adaptive cubic regularization algorithm for nonlinear constrained optimization
An adaptive cubic regularization algorithm for computing H- and Z-eigenvalues of real even-order supersymmetric tensors
A Newton-CG Based Barrier Method for Finding a Second-Order Stationary Point of Nonconvex Conic Optimization with Complexity Guarantees
Newton-MR: inexact Newton method with minimum residual sub-problem solver
Regularized Newton Method with Global O(1/k^2) Convergence
Convergence Properties of an Objective-Function-Free Optimization Regularization Algorithm, Including an O(ε^{-3/2}) Complexity Bound
A decentralized smoothing quadratic regularization algorithm for composite consensus optimization with non-Lipschitz singularities
Faster Riemannian Newton-type optimization by subsampling and cubic regularization
Super-Universal Regularized Newton Method
Quadratic regularization methods with finite-difference gradient approximations
Gradient regularization of Newton method with Bregman distances
Recent Theoretical Advances in Non-Convex Optimization
An SQP Method for Equality Constrained Optimization on Hilbert Manifolds
Regularized Newton Methods for Minimizing Functions with Hölder Continuous Hessians
On the Complexity of an Inexact Restoration Method for Constrained Optimization
A concise second-order complexity analysis for unconstrained optimization using high-order regularized models
Error estimates for iterative algorithms for minimizing regularized quadratic subproblems
Structured Quasi-Newton Methods for Optimization with Orthogonality Constraints
Gradient Descent Finds the Cubic-Regularized Nonconvex Newton Step
Iteration and evaluation complexity for the minimization of functions whose computation is intrinsically inexact
An adaptive nonmonotone trust region method based on a modified scalar approximation of the Hessian in the successive quadratic subproblems
Optimal control of static contact in finite strain elasticity
Solving the Cubic Regularization Model by a Nested Restarting Lanczos Method
The Use of Quadratic Regularization with a Cubic Descent Condition for Unconstrained Optimization


Uses Software


Cites Work