On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization
global convergence; unconstrained optimization; multiobjective optimization; trust-region methods; regularization methods; worst-case complexity; composite nonsmooth optimization
Numerical mathematical programming methods (65K05) Multi-objective and goal programming (90C29) Analysis of algorithms and problem complexity (68Q25) Nonlinear programming (90C30) Numerical methods based on nonlinear programming (49M37) Abstract computational complexity for mathematical programming problems (90C60) Newton-type methods (49M15)
- On the worst-case complexity of nonlinear stepsize control algorithms for convex unconstrained optimization
- Nonlinear stepsize control, trust regions and regularizations for unconstrained optimization
- A note on the worst-case complexity of nonlinear stepsize control methods for convex smooth unconstrained optimization
- Nonlinear stepsize control algorithms: complexity bounds for first- and second-order optimality
- Concise complexity analyses for trust region methods
- scientific article; zbMATH DE number 3868525
- scientific article; zbMATH DE number 3928227
- scientific article; zbMATH DE number 3503002
- scientific article; zbMATH DE number 1932416
- A Family of Trust-Region-Based Algorithms for Unconstrained Minimization with Strong Global Convergence Properties
- A model algorithm for composite nondifferentiable optimization problems
- A new trust region method for nonlinear equations
- A trust-region method for unconstrained multiobjective problems with applications in satisficing processes
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Conditions for convergence of trust region algorithms for nonsmooth optimization
- Convergence of a regularized Euclidean residual algorithm for nonlinear least-squares
- Convergence rate of the trust region method for nonlinear equations under local error bound condition
- Global Convergence of a Class of Trust-Region Methods for Nonconvex Minimization in Hilbert Space
- Modified Gauss–Newton scheme with worst case guarantees for global performance
- Nonlinear stepsize control, trust regions and regularizations for unconstrained optimization
- On the evaluation complexity of composite function minimization with applications to nonconvex nonlinear programming
- On the global convergence of trust region algorithms for unconstrained minimization
- On the superlinear convergence of a trust region algorithm for nonsmooth optimization
- Optimality conditions for \(C^{1,1}\) vector optimization problems
- Optimization theory and methods. Nonlinear programming
- Steepest descent methods for multicriteria optimization
- Trust Region Methods
- On high-order model regularization for multiobjective optimization
- Complexity of gradient descent for multiobjective optimization
- Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
- On High-order Model Regularization for Constrained Optimization
- Nonlinear stepsize control, trust regions and regularizations for unconstrained optimization
- Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
- Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
- On the complexity of a stochastic Levenberg-Marquardt method
- An adaptive trust-region method without function evaluations
- Concise complexity analyses for trust region methods
- The use of quadratic regularization with a cubic descent condition for unconstrained optimization
- A derivative-free trust-region algorithm for composite nonsmooth optimization
- On the worst-case complexity of nonlinear stepsize control algorithms for convex unconstrained optimization
- Regional complexity analysis of algorithms for nonconvex smooth optimization
- Convergence and worst-case complexity of adaptive Riemannian trust-region methods for optimization on manifolds
- Worst-case complexity bounds of directional direct-search methods for multiobjective optimization
- Worst-Case Complexity of TRACE with Inexact Subproblem Solutions for Nonconvex Smooth Optimization
- On complexity and convergence of high-order coordinate descent algorithms for smooth nonconvex box-constrained minimization
- An adaptive regularization method in Banach spaces
- A generalized worst-case complexity analysis for non-monotone line searches
- On the convergence of trust region algorithms for unconstrained minimization without derivatives
- Global complexity bound of the inexact Levenberg-Marquardt method
- Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary
- On the worst-case evaluation complexity of non-monotone line search algorithms
- Stochastic trust-region methods with trust-region radius depending on probabilistic models
- Global complexity bound of the Levenberg-Marquardt method
- ARC\(_q\): a new adaptive regularization by cubics
- Implementable tensor methods in unconstrained convex optimization
- Worst-case evaluation complexity of regularization methods for smooth unconstrained optimization using Hölder continuous gradients
- Evaluation complexity for nonlinear constrained optimization using unscaled KKT conditions and high-order models
- A brief survey of methods for solving nonlinear least-squares problems
- Derivative-free separable quadratic modeling and cubic regularization for unconstrained optimization
- Iteration and evaluation complexity for the minimization of functions whose computation is intrinsically inexact
- A note on the worst-case complexity of nonlinear stepsize control methods for convex smooth unconstrained optimization
- On regularization and active-set methods with complexity for constrained optimization
- On the use of third-order models with fourth-order regularization for unconstrained optimization
- Recent advances in trust region algorithms
- Convergence rates analysis of a multiobjective proximal gradient method
- Nonlinear stepsize control algorithms: complexity bounds for first- and second-order optimality
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
- Universal nonmonotone line search method for nonconvex multiobjective optimization problems with convex constraints
- Adaptive trust-region method on Riemannian manifold
- On the complexity of an inexact restoration method for constrained optimization
- Inexact restoration with subsampled trust-region methods for finite-sum minimization
- Complexity bound of trust-region methods for convex smooth unconstrained multiobjective optimization
- Large-scale unconstrained optimization using separable cubic modeling and matrix-free subspace minimization
- A structured diagonal Hessian approximation method with evaluation complexity analysis for nonlinear least squares
MaRDI item Q494338