On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization
DOI: 10.1007/s10107-014-0794-9 · zbMATH Open: 1319.90065 · OpenAlex: W2043409612 · MaRDI QID: Q494338
Authors: Geovani Nunes Grapiglia, Jinyun Yuan, Yaxiang Yuan
Publication date: 31 August 2015
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://doi.org/10.1007/s10107-014-0794-9
Recommendations
- On the worst-case complexity of nonlinear stepsize control algorithms for convex unconstrained optimization
- Nonlinear stepsize control, trust regions and regularizations for unconstrained optimization
- A note on the worst-case complexity of nonlinear stepsize control methods for convex smooth unconstrained optimization
- Nonlinear stepsize control algorithms: complexity bounds for first- and second-order optimality
- Concise complexity analyses for trust region methods
Keywords: global convergence; unconstrained optimization; multiobjective optimization; trust-region methods; regularization methods; worst-case complexity; composite nonsmooth optimization
MSC classification: Numerical mathematical programming methods (65K05); Multi-objective and goal programming (90C29); Analysis of algorithms and problem complexity (68Q25); Nonlinear programming (90C30); Numerical methods based on nonlinear programming (49M37); Abstract computational complexity for mathematical programming problems (90C60); Newton-type methods (49M15)
Cites Work
- Title not available
- Optimization theory and methods. Nonlinear programming
- A new trust region method for nonlinear equations
- Title not available
- Conditions for convergence of trust region algorithms for nonsmooth optimization
- Trust Region Methods
- Optimality conditions for \(C^{1,1}\) vector optimization problems
- Convergence rate of the trust region method for nonlinear equations under local error bound condition
- Convergence of a regularized Euclidean residual algorithm for nonlinear least-squares
- Global Convergence of a Class of Trust-Region Methods for Nonconvex Minimization in Hilbert Space
- Nonlinear stepsize control, trust regions and regularizations for unconstrained optimization
- Steepest descent methods for multicriteria optimization.
- On the evaluation complexity of composite function minimization with applications to nonconvex nonlinear programming
- On the superlinear convergence of a trust region algorithm for nonsmooth optimization
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- A trust-region method for unconstrained multiobjective problems with applications in satisficing processes
- Modified Gauss–Newton scheme with worst case guarantees for global performance
- Title not available
- On the global convergence of trust region algorithms for unconstrained minimization
- A Family of Trust-Region-Based Algorithms for Unconstrained Minimization with Strong Global Convergence Properties
- A model algorithm for composite nondifferentiable optimization problems
- Title not available
Cited In (47)
- Complexity of gradient descent for multiobjective optimization
- Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
- On High-order Model Regularization for Constrained Optimization
- Nonlinear stepsize control, trust regions and regularizations for unconstrained optimization
- Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
- On the complexity of a stochastic Levenberg-Marquardt method
- Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
- An adaptive trust-region method without function evaluations
- The use of quadratic regularization with a cubic descent condition for unconstrained optimization
- Convergence and worst-case complexity of adaptive Riemannian trust-region methods for optimization on manifolds
- Worst-Case Complexity of TRACE with Inexact Subproblem Solutions for Nonconvex Smooth Optimization
- On the worst-case complexity of nonlinear stepsize control algorithms for convex unconstrained optimization
- A derivative-free trust-region algorithm for composite nonsmooth optimization
- Concise complexity analyses for trust region methods
- Regional complexity analysis of algorithms for nonconvex smooth optimization
- Worst-case complexity bounds of directional direct-search methods for multiobjective optimization
- An adaptive regularization method in Banach spaces
- On complexity and convergence of high-order coordinate descent algorithms for smooth nonconvex box-constrained minimization
- A generalized worst-case complexity analysis for non-monotone line searches
- On the convergence of trust region algorithms for unconstrained minimization without derivatives
- Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary
- Global complexity bound of the inexact Levenberg-Marquardt method
- On the worst-case evaluation complexity of non-monotone line search algorithms
- Stochastic trust-region methods with trust-region radius depending on probabilistic models
- Global complexity bound of the Levenberg-Marquardt method
- ARC\(_q\): a new adaptive regularization by cubics
- Implementable tensor methods in unconstrained convex optimization
- Worst-case evaluation complexity of regularization methods for smooth unconstrained optimization using Hölder continuous gradients
- Evaluation complexity for nonlinear constrained optimization using unscaled KKT conditions and high-order models
- Derivative-free separable quadratic modeling and cubic regularization for unconstrained optimization
- Iteration and evaluation complexity for the minimization of functions whose computation is intrinsically inexact
- A brief survey of methods for solving nonlinear least-squares problems
- A note on the worst-case complexity of nonlinear stepsize control methods for convex smooth unconstrained optimization
- On regularization and active-set methods with complexity for constrained optimization
- On the use of third-order models with fourth-order regularization for unconstrained optimization
- Convergence rates analysis of a multiobjective proximal gradient method
- Recent advances in trust region algorithms
- Nonlinear stepsize control algorithms: complexity bounds for first- and second-order optimality
- Universal nonmonotone line search method for nonconvex multiobjective optimization problems with convex constraints
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
- Adaptive trust-region method on Riemannian manifold
- On the complexity of an inexact restoration method for constrained optimization
- Complexity bound of trust-region methods for convex smooth unconstrained multiobjective optimization
- Inexact restoration with subsampled trust-region methods for finite-sum minimization
- On high-order model regularization for multiobjective optimization
- Large-scale unconstrained optimization using separable cubic modeling and matrix-free subspace minimization
- A structured diagonal Hessian approximation method with evaluation complexity analysis for nonlinear least squares