Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
DOI: 10.1007/S10107-009-0286-5 · zbMATH Open: 1229.90192 · DBLP: journals/mp/CartisGT11 · OpenAlex: W2156005216 · Wikidata: Q58185756 · Scholia: Q58185756 · MaRDI QID: Q535013 · FDO: Q535013
Authors: Coralia Cartis, Nicholas I. M. Gould, Philippe L. Toint
Publication date: 11 May 2011
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://doi.org/10.1007/s10107-009-0286-5
Recommendations
- An improvement of adaptive cubic regularization method for unconstrained optimization problems
- On the use of iterative methods in cubic regularization for unconstrained optimization
- An efficient nonmonotone adaptive cubic regularization method with line search for unconstrained optimization problem
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization
Keywords: global convergence; nonlinear optimization; unconstrained optimization; local convergence; Newton's method; trust-region methods; cubic regularization
MSC classification: Numerical mathematical programming methods (65K05); Nonlinear programming (90C30); Numerical methods based on nonlinear programming (49M37); Iterative numerical methods for linear systems (65F10); Newton-type methods (49M15); Numerical computation of solutions to single equations (65H05); Implicit function theorems; global Newton methods on manifolds (58C15)
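For readers scanning the keywords, a minimal statement of the model behind "cubic regularization" as studied in this paper: at each iterate \(x_k\), the ARC framework (approximately) minimises the cubic model
\[
m_k(s) \;=\; f(x_k) + g_k^{\top} s + \tfrac{1}{2}\, s^{\top} B_k s + \tfrac{1}{3}\, \sigma_k \|s\|^3,
\]
where \(g_k = \nabla f(x_k)\), \(B_k\) is a symmetric approximation to the Hessian, and the regularisation weight \(\sigma_k > 0\) is updated adaptively, much as a trust-region radius would be.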
Cites Work
- Title not available
- CUTEr and SifDec
- Solving the Trust-Region Subproblem using the Lanczos Method
- Title not available
- Numerical Optimization
- Title not available
- Benchmarking optimization software with performance profiles.
- Introductory lectures on convex optimization. A basic course.
- Trust Region Methods
- Convergence of quasi-Newton matrices generated by the symmetric rank one update
- Truncated-Newton algorithms for large-scale unconstrained optimization
- Inexact Newton Methods
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- A Characterization of Superlinear Convergence and Its Application to Quasi-Newton Methods
- Analysis of a Symmetric Rank-One Trust Region Method
- Cubic regularization of Newton method and its global performance
- The “global” convergence of Broyden-like methods with suitable line search
- A Modified Equation Approach to Constructing Fourth Order Methods for Acoustic Wave Propagation
- Title not available
- Accelerating the cubic regularization of Newton's method on convex problems
- Sensitivity of trust-region algorithms to their parameters
- Affine conjugate adaptive Newton methods for nonlinear elastomechanics
- Title not available
- A Fast Algorithm for the Multiplication of Generalized Hilbert Matrices with Vectors
Cited In (only showing first 100 items)
- On the Quadratic Convergence of the Cubic Regularization Method under a Local Error Bound Condition
- Gradient Descent Finds the Cubic-Regularized Nonconvex Newton Step
- On High-Order Multilevel Optimization Strategies
- Error estimates for iterative algorithms for minimizing regularized quadratic subproblems
- An efficient nonmonotone adaptive cubic regularization method with line search for unconstrained optimization problem
- Convergence Properties of an Objective-Function-Free Optimization Regularization Algorithm, Including an \(\boldsymbol{\mathcal{O}(\epsilon^{-3/2})}\) Complexity Bound
- A Unified Efficient Implementation of Trust-region Type Algorithms for Unconstrained Optimization
- First-Order Methods for Nonconvex Quadratic Minimization
- \(\rho\)-regularization subproblems: strong duality and an eigensolver-based algorithm
- Sample average approximation with sparsity-inducing penalty for high-dimensional stochastic programming
- Two modified adaptive cubic regularization algorithms by using the nonmonotone Armijo-type line search
- Certification of real inequalities: templates and sums of squares
- Second-Order Guarantees of Distributed Gradient Algorithms
- Projected adaptive cubic regularization algorithm with derivative-free filter technique for box constrained optimization
- Local convergence of tensor methods
- A cubic regularization of Newton's method with finite difference Hessian approximations
- A note on inexact gradient and Hessian conditions for cubic regularized Newton's method
- A Unified Adaptive Tensor Approximation Scheme to Accelerate Composite Convex Optimization
- Solving the Cubic Regularization Model by a Nested Restarting Lanczos Method
- Adaptive regularization with cubics on manifolds
- Stochastic analysis of an adaptive cubic regularization method under inexact gradient evaluations and dynamic Hessian accuracy
- A filter sequential adaptive cubic regularization algorithm for nonlinear constrained optimization
- Convergence of Newton-MR under Inexact Hessian Information
- Structured Quasi-Newton Methods for Optimization with Orthogonality Constraints
- New subspace minimization conjugate gradient methods based on regularization model for unconstrained optimization
- An accelerated first-order method with complexity analysis for solving cubic regularization subproblems
- Complexity Analysis of a Trust Funnel Algorithm for Equality Constrained Optimization
- A Newton-CG algorithm with complexity guarantees for smooth unconstrained optimization
- Several accelerated subspace minimization conjugate gradient methods based on regularization model and convergence rate analysis for nonconvex problems
- An SQP Method for Equality Constrained Optimization on Hilbert Manifolds
- Quadratic regularization methods with finite-difference gradient approximations
- Implementable tensor methods in unconstrained convex optimization
- On global minimizers of quadratic functions with cubic regularization
- Combining stochastic adaptive cubic regularization with negative curvature for nonconvex optimization
- Cubic regularization methods with second-order complexity guarantee based on a new subproblem reformulation
- Iteration and evaluation complexity for the minimization of functions whose computation is intrinsically inexact
- Riemannian stochastic variance-reduced cubic regularized Newton method for submanifold optimization
- On large-scale unconstrained optimization and arbitrary regularization
- On the use of third-order models with fourth-order regularization for unconstrained optimization
- Zeroth-order nonconvex stochastic optimization: handling constraints, high dimensionality, and saddle points
- On monotonic estimates of the norm of the minimizers of regularized quadratic functions in Krylov spaces
- A note about the complexity of minimizing Nesterov's smooth Chebyshev-Rosenbrock function
- An adaptive high order method for finding third-order critical points of nonconvex optimization
- A sequential adaptive regularisation using cubics algorithm for solving nonlinear equality constrained optimization
- Majorization-minimization-based Levenberg-Marquardt method for constrained nonlinear least squares
- Finding second-order stationary points in constrained minimization: a feasible direction approach
- On local nonglobal minimum of trust-region subproblem and extension
- Large-scale unconstrained optimization using separable cubic modeling and matrix-free subspace minimization
- Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
- Solving Large-Scale Cubic Regularization by a Generalized Eigenvalue Problem
- On High-order Model Regularization for Constrained Optimization
- Nonlinear stepsize control, trust regions and regularizations for unconstrained optimization
- ARCq: a new adaptive regularization by cubics
- Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
- A line-search algorithm inspired by the adaptive cubic regularization framework and complexity analysis
- Cubic regularization of Newton method and its global performance
- Cubic regularization in symmetric rank-1 quasi-Newton methods
- Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
- Cubic overestimation and secant updating for unconstrained optimization of \(C^{2,1}\) functions
- A Newton-like method with mixed factorizations and cubic regularization for unconstrained minimization
- Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization
- Preconditioning and globalizing conjugate gradients in dual space for quadratically penalized nonlinear-least squares problems
- Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions
- On Regularization and Active-set Methods with Complexity for Constrained Optimization
- A Newton-like trust region method for large-scale unconstrained nonconvex minimization
- A new regularized quasi-Newton algorithm for unconstrained optimization
- On the complexity of finding first-order critical points in constrained nonlinear optimization
- On the worst-case complexity of nonlinear stepsize control algorithms for convex unconstrained optimization
- An adaptive cubic regularization algorithm for nonconvex optimization with convex constraints and its function-evaluation complexity
- Regional complexity analysis of algorithms for nonconvex smooth optimization
- On complexity and convergence of high-order coordinate descent algorithms for smooth nonconvex box-constrained minimization
- A smoothing SQP framework for a class of composite \(L_q\) minimization over polyhedron
- Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
- Newton-type methods for non-convex optimization under inexact Hessian information
- Minimizing uniformly convex functions by cubic regularization of Newton method
- A branch and bound algorithm for the global optimization of Hessian Lipschitz continuous functions
- An interior affine scaling cubic regularization algorithm for derivative-free optimization subject to bound constraints
- A cubic regularization algorithm for unconstrained optimization using line search and nonmonotone techniques
- Algebraic rules for quadratic regularization of Newton's method
- Smoothness parameter of power of Euclidean norm
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- trlib: a vector-free implementation of the GLTR method for iterative solution of the trust region problem
- Interior-point methods for nonconvex nonlinear programming: cubic regularization
- Separable cubic modeling and a trust-region strategy for unconstrained minimization with impact in global optimization
- A new augmented Lagrangian method for equality constrained optimization with simple unconstrained subproblem
- On the Complexity of an Inexact Restoration Method for Constrained Optimization
- On solving trust-region and other regularised subproblems in optimization
- An inexact \(\ell_1\) penalty SQP algorithm for PDE-constrained optimization with an application to shape optimization in linear elasticity
- Updating the regularization parameter in the adaptive cubic regularization algorithm
- Title not available
- A Trust Region Method for Finding Second-Order Stationarity in Linearly Constrained Nonconvex Optimization
- An improvement of adaptive cubic regularization method for unconstrained optimization problems
- Regularized Newton Methods for Minimizing Functions with Hölder Continuous Hessians
- A trust region algorithm with adaptive cubic regularization methods for nonsmooth convex minimization
- Optimal control of static contact in finite strain elasticity
- On a multilevel Levenberg–Marquardt method for the training of artificial neural networks and its application to the solution of partial differential equations
- Complexity Analysis of Second-Order Line-Search Algorithms for Smooth Nonconvex Optimization
- On proximal gradient method for the convex problems regularized with the group reproducing kernel norm
- Convergence and evaluation-complexity analysis of a regularized tensor-Newton method for solving nonlinear least-squares problems
- Adaptive Quadratically Regularized Newton Method for Riemannian Optimization