DOI: 10.1007/s10107-009-0337-y
zbMath: 1229.90193
OpenAlex: W1994974865
Wikidata: Q58185744
Scholia: Q58185744
MaRDI QID: Q652287
Nicholas I. M. Gould, Coralia Cartis, Philippe L. Toint
Publication date: 14 December 2011
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://doi.org/10.1007/s10107-009-0337-y
Related Items:
An improvement of adaptive cubic regularization method for unconstrained optimization problems,
Stochastic analysis of an adaptive cubic regularization method under inexact gradient evaluations and dynamic Hessian accuracy,
Finding zeros of Hölder metrically subregular mappings via globally convergent Levenberg–Marquardt methods,
On high-order model regularization for multiobjective optimization,
On the complexity of solving feasibility problems with regularized models,
A cubic regularization of Newton's method with finite difference Hessian approximations,
A smoothing SQP framework for a class of composite \(L_q\) minimization over polyhedron,
Concise complexity analyses for trust region methods,
A new regularized quasi-Newton algorithm for unconstrained optimization,
Global convergence rate analysis of unconstrained optimization methods based on probabilistic models,
Global complexity bound of the inexact Levenberg-Marquardt method,
Cubic overestimation and secant updating for unconstrained optimization of \(C^{2,1}\) functions,
A note on the worst-case complexity of nonlinear stepsize control methods for convex smooth unconstrained optimization,
Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization,
Smoothness parameter of power of Euclidean norm,
On the use of third-order models with fourth-order regularization for unconstrained optimization,
Cubic regularization methods with second-order complexity guarantee based on a new subproblem reformulation,
Convergence and evaluation-complexity analysis of a regularized tensor-Newton method for solving nonlinear least-squares problems,
A Newton-like method with mixed factorizations and cubic regularization for unconstrained minimization,
On a global complexity bound of the Levenberg-Marquardt method,
Riemannian stochastic variance-reduced cubic regularized Newton method for submanifold optimization,
Global complexity bound analysis of the Levenberg-Marquardt method for nonsmooth equations and its application to the nonlinear complementarity problem,
On the use of the energy norm in trust-region and adaptive cubic regularization subproblems,
On the worst-case evaluation complexity of non-monotone line search algorithms,
First-Order Methods for Nonconvex Quadratic Minimization,
On High-order Model Regularization for Constrained Optimization,
A trust region algorithm with adaptive cubic regularization methods for nonsmooth convex minimization,
Zeroth-order nonconvex stochastic optimization: handling constraints, high dimensionality, and saddle points,
Majorization-minimization-based Levenberg-Marquardt method for constrained nonlinear least squares,
A sequential adaptive regularisation using cubics algorithm for solving nonlinear equality constrained optimization,
Newton-type methods for non-convex optimization under inexact Hessian information,
A Trust Region Method for Finding Second-Order Stationarity in Linearly Constrained Nonconvex Optimization,
Interior-point methods for nonconvex nonlinear programming: cubic regularization,
A Newton-Based Method for Nonconvex Optimization with Fast Evasion of Saddle Points,
A Unified Adaptive Tensor Approximation Scheme to Accelerate Composite Convex Optimization,
Second-Order Guarantees of Distributed Gradient Algorithms,
On the complexity of finding first-order critical points in constrained nonlinear optimization,
Universal Regularization Methods: Varying the Power, the Smoothness and the Accuracy,
A line-search algorithm inspired by the adaptive cubic regularization framework and complexity analysis,
Implementable tensor methods in unconstrained convex optimization,
Complexity of Partially Separable Convexly Constrained Optimization with Non-Lipschitzian Singularities,
On the Quadratic Convergence of the Cubic Regularization Method under a Local Error Bound Condition,
Structured nonconvex and nonsmooth optimization: algorithms and iteration complexity analysis,
On High-Order Multilevel Optimization Strategies,
Solving Large-Scale Cubic Regularization by a Generalized Eigenvalue Problem,
Convergence of Newton-MR under Inexact Hessian Information,
trlib: a vector-free implementation of the GLTR method for iterative solution of the trust region problem,
On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization,
ARCq: a new adaptive regularization by cubics,
Smoothing quadratic regularization method for hemivariational inequalities,
Cubic regularization in symmetric rank-1 quasi-Newton methods,
On Regularization and Active-set Methods with Complexity for Constrained Optimization,
Complexity Analysis of Second-Order Line-Search Algorithms for Smooth Nonconvex Optimization,
Complexity Analysis of a Trust Funnel Algorithm for Equality Constrained Optimization,
Nonlinear stepsize control algorithms: complexity bounds for first- and second-order optimality,
Accelerated Regularized Newton Methods for Minimizing Composite Convex Functions,
Projected adaptive cubic regularization algorithm with derivative-free filter technique for box constrained optimization,
A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon ^{-3/2})\) for nonconvex optimization,
Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models,
Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization,
Updating the regularization parameter in the adaptive cubic regularization algorithm,
Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results,
A derivative-free trust-region algorithm for composite nonsmooth optimization,
Regional complexity analysis of algorithms for nonconvex smooth optimization,
Worst-case complexity bounds of directional direct-search methods for multiobjective optimization,
A new augmented Lagrangian method for equality constrained optimization with simple unconstrained subproblem,
Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization,
A generalized worst-case complexity analysis for non-monotone line searches,
Minimizing uniformly convex functions by cubic regularization of Newton method,
A decoupled first/second-order steps technique for nonconvex nonlinear unconstrained optimization with improved complexity bounds,
Evaluation Complexity for Nonlinear Constrained Optimization Using Unscaled KKT Conditions and High-Order Models,
Complexity analysis of interior point algorithms for non-Lipschitz and nonconvex minimization,
On the use of iterative methods in cubic regularization for unconstrained optimization,
Adaptive regularization with cubics on manifolds,
New subspace minimization conjugate gradient methods based on regularization model for unconstrained optimization,
On the Complexity of an Inexact Restoration Method for Constrained Optimization,
A concise second-order complexity analysis for unconstrained optimization using high-order regularized models,
A note on inexact gradient and Hessian conditions for cubic regularized Newton's method,
A second-order globally convergent direct-search method and its worst-case complexity,
Worst case complexity of direct search,
Large-scale unconstrained optimization using separable cubic modeling and matrix-free subspace minimization,
Sharp Worst-Case Evaluation Complexity Bounds for Arbitrary-Order Nonconvex Optimization with Inexpensive Constraints,
Separable cubic modeling and a trust-region strategy for unconstrained minimization with impact in global optimization,
On the worst-case complexity of nonlinear stepsize control algorithms for convex unconstrained optimization,
Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives,
Combining stochastic adaptive cubic regularization with negative curvature for nonconvex optimization,
Trust-Region Methods Without Using Derivatives: Worst Case Complexity and the NonSmooth Case,
Global complexity bound of the Levenberg–Marquardt method,
A cubic regularization algorithm for unconstrained optimization using line search and nonmonotone techniques,
On large-scale unconstrained optimization and arbitrary regularization,
An adaptive high order method for finding third-order critical points of nonconvex optimization,
On global minimizers of quadratic functions with cubic regularization,
Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary,
On complexity and convergence of high-order coordinate descent algorithms for smooth nonconvex box-constrained minimization,
On local nonglobal minimum of trust-region subproblem and extension,
Algebraic rules for quadratic regularization of Newton's method,
Several accelerated subspace minimization conjugate gradient methods based on regularization model and convergence rate analysis for nonconvex problems,
Recent advances in trust region algorithms,
An interior affine scaling cubic regularization algorithm for derivative-free optimization subject to bound constraints,
A note about the complexity of minimizing Nesterov's smooth Chebyshev–Rosenbrock function,
Worst-Case Complexity of TRACE with Inexact Subproblem Solutions for Nonconvex Smooth Optimization,
An accelerated first-order method for non-convex optimization on manifolds,
Two modified adaptive cubic regularization algorithms by using the nonmonotone Armijo-type line search,
An adaptive regularization method in Banach spaces,
Complexity bound of trust-region methods for convex smooth unconstrained multiobjective optimization,
A filter sequential adaptive cubic regularization algorithm for nonlinear constrained optimization,
An adaptive cubic regularization algorithm for computing H- and Z-eigenvalues of real even-order supersymmetric tensors,
Newton-MR: inexact Newton method with minimum residual sub-problem solver,
A nonlinear conjugate gradient method with complexity guarantees and its application to nonconvex regression,
Convergence Properties of an Objective-Function-Free Optimization Regularization Algorithm, Including an \(\boldsymbol{\mathcal{O}(\epsilon^{-3/2})}\) Complexity Bound,
A Newton-CG Based Augmented Lagrangian Method for Finding a Second-Order Stationary Point of Nonconvex Equality Constrained Optimization with Complexity Guarantees,
A decentralized smoothing quadratic regularization algorithm for composite consensus optimization with non-Lipschitz singularities,
Super-Universal Regularized Newton Method,
On the complexity of a stochastic Levenberg-Marquardt method,
The evaluation complexity of finding high-order minimizers of nonconvex optimization,
Recent Theoretical Advances in Non-Convex Optimization,
Worst case complexity of direct search under convexity,
Structured Quasi-Newton Methods for Optimization with Orthogonality Constraints,
Gradient Descent Finds the Cubic-Regularized Nonconvex Newton Step,
Iteration and evaluation complexity for the minimization of functions whose computation is intrinsically inexact,
Adaptive Regularization Algorithms with Inexact Evaluations for Nonconvex Optimization,
Worst-case evaluation complexity of non-monotone gradient-related algorithms for unconstrained optimization,
Trust-Region Newton-CG with Strong Second-Order Complexity Guarantees for Nonconvex Optimization,
On the Evaluation Complexity of Constrained Nonlinear Least-Squares and General Constrained Nonlinear Optimization Using Second-Order Methods,
Solving the Cubic Regularization Model by a Nested Restarting Lanczos Method,
The Use of Quadratic Regularization with a Cubic Descent Condition for Unconstrained Optimization