Evaluation complexity for nonlinear constrained optimization using unscaled KKT conditions and high-order models
From MaRDI portal
Publication:2802144
MSC classification
- 65K05 Numerical mathematical programming methods
- 68Q25 Analysis of algorithms and problem complexity
- 90C30 Nonlinear programming
- 49M37 Numerical methods based on nonlinear programming
- 90C60 Abstract computational complexity for mathematical programming problems
- 49M05 Numerical methods based on necessary conditions
Recommendations
- Evaluation complexity bounds for smooth constrained nonlinear optimization using scaled KKT conditions and high-order models
- Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
- Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
- On the complexity of finding first-order critical points in constrained nonlinear optimization
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
Cites work
- scientific article; zbMATH DE number 3381034
- A new sequential optimality condition for constrained optimization and algorithmic consequences
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- An adaptive cubic regularization algorithm for nonconvex optimization with convex constraints and its function-evaluation complexity
- Characterizations of Łojasiewicz inequalities: Subgradient flows, talweg, convexity
- Convergence of a regularized Euclidean residual algorithm for nonlinear least-squares
- Cubic regularization of Newton method and its global performance
- Introductory lectures on convex optimization. A basic course.
- Lagrange Multipliers and Optimality
- On sequential optimality conditions for smooth constrained optimization
- On the Constant Positive Linear Dependence Condition and Its Application to SQP Methods
- On the complexity of finding first-order critical points in constrained nonlinear optimization
- On the complexity of steepest descent, Newton's and regularized Newton's methods for nonconvex unconstrained optimization problems
- On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization
- On the evaluation complexity of composite function minimization with applications to nonconvex nonlinear programming
- On the evaluation complexity of cubic regularization methods for potentially rank-deficient nonlinear least-squares problems and its relevance to constrained nonlinear optimization
- On the oracle complexity of first-order and derivative-free algorithms for smooth nonconvex minimization
- Practical augmented Lagrangian methods for constrained optimization
- Worst case complexity of direct search
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
- Worst-case evaluation complexity of non-monotone gradient-related algorithms for unconstrained optimization
Cited in (24)
- Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
- On high-order model regularization for multiobjective optimization
- Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
- An augmented Lagrangian algorithm for nonlinear semidefinite programming applied to the covering problem
- A unified adaptive tensor approximation scheme to accelerate composite convex optimization
- Worst-case evaluation complexity of a quadratic penalty method for nonconvex optimization
- On the complexity of an inexact restoration method for constrained optimization
- Direct search based on probabilistic feasible descent for bound and linearly constrained problems
- A control-theoretic perspective on optimal high-order optimization
- A Newton-CG Based Augmented Lagrangian Method for Finding a Second-Order Stationary Point of Nonconvex Equality Constrained Optimization with Complexity Guarantees
- An adaptive high order method for finding third-order critical points of nonconvex optimization
- Hessian barrier algorithms for non-convex conic optimization
- Evaluation complexity bounds for smooth constrained nonlinear optimization using scaled KKT conditions and high-order models
- A second-order optimality condition with first- and second-order complementarity associated with global convergence of algorithms
- A Newton-CG based barrier-augmented Lagrangian method for general nonconvex conic optimization
- Ghost penalties in nonconvex constrained optimization: diminishing stepsizes and iteration complexity
- On the complexity of finding first-order critical points in constrained nonlinear optimization
- Optimality conditions and global convergence for nonlinear semidefinite programming
- On High-order Model Regularization for Constrained Optimization
- Complexity analysis of a trust funnel algorithm for equality constrained optimization
- Iteration and evaluation complexity for the minimization of functions whose computation is intrinsically inexact
- On regularization and active-set methods with complexity for constrained optimization
- Perseus: a simple and optimal high-order method for variational inequalities
- Error bound conditions and convergence of optimization methods on smooth and proximally smooth manifolds