Evaluation Complexity for Nonlinear Constrained Optimization Using Unscaled KKT Conditions and High-Order Models
MaRDI QID: Q2802144
DOI: 10.1137/15M1031631
zbMath: 1335.90094
OpenAlex: W2337385024
Ernesto G. Birgin, J. L. Gardenghi, José Mario Martínez, Sandra Augusta Santos, Philippe L. Toint
Publication date: 25 April 2016
Published in: SIAM Journal on Optimization
Full work available at URL: https://doi.org/10.1137/15m1031631
MSC classification: Analysis of algorithms and problem complexity (68Q25); Numerical mathematical programming methods (65K05); Abstract computational complexity for mathematical programming problems (90C60); Nonlinear programming (90C30); Numerical methods based on necessary conditions (49M05); Numerical methods based on nonlinear programming (49M37)
Related Items
- On high-order model regularization for multiobjective optimization
- A second-order optimality condition with first- and second-order complementarity associated with global convergence of algorithms
- Error bound conditions and convergence of optimization methods on smooth and proximally smooth manifolds
- Direct search based on probabilistic feasible descent for bound and linearly constrained problems
- Worst-case evaluation complexity of a quadratic penalty method for nonconvex optimization
- A Newton-CG Based Augmented Lagrangian Method for Finding a Second-Order Stationary Point of Nonconvex Equality Constrained Optimization with Complexity Guarantees
- On High-order Model Regularization for Constrained Optimization
- A Unified Adaptive Tensor Approximation Scheme to Accelerate Composite Convex Optimization
- On Regularization and Active-set Methods with Complexity for Constrained Optimization
- Complexity Analysis of a Trust Funnel Algorithm for Equality Constrained Optimization
- Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
- Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
- On the Complexity of an Inexact Restoration Method for Constrained Optimization
- Optimality conditions and global convergence for nonlinear semidefinite programming
- An augmented Lagrangian algorithm for nonlinear semidefinite programming applied to the covering problem
- Iteration and evaluation complexity for the minimization of functions whose computation is intrinsically inexact
- An adaptive high order method for finding third-order critical points of nonconvex optimization
- Ghost Penalties in Nonconvex Constrained Optimization: Diminishing Stepsizes and Iteration Complexity
- A control-theoretic perspective on optimal high-order optimization
Cites Work
- On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Worst case complexity of direct search
- Introductory lectures on convex optimization. A basic course.
- On the complexity of finding first-order critical points in constrained nonlinear optimization
- Cubic regularization of Newton method and its global performance
- On the Evaluation Complexity of Cubic Regularization Methods for Potentially Rank-Deficient Nonlinear Least-Squares Problems and Its Relevance to Constrained Nonlinear Optimization
- On the Oracle Complexity of First-Order and Derivative-Free Algorithms for Smooth Nonconvex Minimization
- Convergence of a Regularized Euclidean Residual Algorithm for Nonlinear Least-Squares
- On the Complexity of Steepest Descent, Newton's and Regularized Newton's Methods for Nonconvex Unconstrained Optimization Problems
- A New Sequential Optimality Condition for Constrained Optimization and Algorithmic Consequences
- Lagrange Multipliers and Optimality
- On the Evaluation Complexity of Composite Function Minimization with Applications to Nonconvex Nonlinear Programming
- Characterizations of Łojasiewicz inequalities: Subgradient flows, talweg, convexity
- On the Constant Positive Linear Dependence Condition and Its Application to SQP Methods
- An adaptive cubic regularization algorithm for nonconvex optimization with convex constraints and its function-evaluation complexity
- Worst-case evaluation complexity of non-monotone gradient-related algorithms for unconstrained optimization
- Practical Augmented Lagrangian Methods for Constrained Optimization
- On sequential optimality conditions for smooth constrained optimization