A second-order optimality condition with first- and second-order complementarity associated with global convergence of algorithms
From MaRDI portal
Recommendations
- A second-order sequential optimality condition associated to the convergence of optimization algorithms
- On second-order optimality conditions for nonlinear programming
- A new sequential optimality condition for constrained optimization and algorithmic consequences
- Local convergence of exact and inexact augmented Lagrangian methods under the second-order sufficient optimality condition
- Second-order global optimality conditions for convex composite optimization
Cites work
- scientific article; zbMATH DE number 1817650 (title unavailable)
- scientific article; zbMATH DE number 1818892 (title unavailable)
- scientific article; zbMATH DE number 107545 (title unavailable)
- scientific article; zbMATH DE number 1502618 (title unavailable)
- scientific article; zbMATH DE number 3307153 (title unavailable)
- A cone-continuity constraint qualification and algorithmic consequences
- A new sequential optimality condition for constrained optimization and algorithmic consequences
- A new trust-region algorithm for equality constrained optimization
- A note on upper Lipschitz stability, error bounds, and critical multipliers for Lipschitz-continuous KKT systems
- A primal-dual algorithm for nonlinear programming exploiting negative curvature directions
- A stabilized SQP method: global convergence
- An augmented Lagrangian interior-point method using directions of negative curvature
- Augmented Lagrangians with constrained subproblems and convergence to second-order stationary points
- Complexity analysis of interior point algorithms for non-Lipschitz and nonconvex minimization
- Convergence to Second-Order Stationary Points of a Primal-Dual Algorithm Model for Nonlinear Programming
- Convergence to second-order stationary points in inequality constrained optimization
- Convergent Infeasible Interior-Point Trust-Region Methods for Constrained Minimization
- Corrigendum to: ``On the complexity of finding first-order critical points in constrained nonlinear optimization''
- Erratum to: ``A second-order sequential optimality condition associated to the convergence of optimization algorithms''
- Evaluation complexity for nonlinear constrained optimization using unscaled KKT conditions and high-order models
- Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions
- Global Convergence of a Class of Trust Region Algorithms for Optimization with Simple Bounds
- Interior-point \(\ell_2\)-penalty methods for nonlinear programming with strong global convergence properties
- On Augmented Lagrangian Methods with General Lower-Level Constraints
- On affine scaling algorithms for nonconvex quadratic programming
- On second-order optimality conditions for nonlinear programming
- On sequential optimality conditions for smooth constrained optimization
- On the Convergence Theory of Trust-Region-Based Algorithms for Equality-Constrained Optimization
- On the complexity of approximating a KKT point of quadratic programming
- On the global convergence of interior-point nonlinear programming algorithms
- Practical augmented Lagrangian methods for constrained optimization
- Second-order negative-curvature methods for box-constrained and general constrained optimization
- Some theoretical limitations of second-order algorithms for smooth constrained optimization
- Strict Constraint Qualifications and Sequential Optimality Conditions for Constrained Optimization
- The Łojasiewicz Inequality for Nonsmooth Subanalytic Functions with Applications to Subgradient Dynamical Systems
- Trust Region Methods
- Two new weak constraint qualifications and applications
Cited in (14)
- Augmented Lagrangians with constrained subproblems and convergence to second-order stationary points
- On the behavior of Lagrange multipliers in convex and nonconvex infeasible interior point methods
- On the weak second-order optimality condition for nonlinear semidefinite and second-order cone programming
- New sequential optimality conditions for mathematical programs with complementarity constraints and algorithmic consequences
- On optimality conditions for nonlinear conic programming
- Optimality conditions and global convergence for nonlinear semidefinite programming
- Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary
- Optimality conditions for nonlinear second-order cone programming and symmetric cone programming
- On the use of Jordan algebras for improving global convergence of an augmented Lagrangian method in nonlinear semidefinite programming
- An augmented Lagrangian algorithm for nonlinear semidefinite programming applied to the covering problem
- On the fulfillment of the complementary approximate Karush-Kuhn-Tucker conditions and algorithmic applications
- A second-order sequential optimality condition associated to the convergence of optimization algorithms
- Augmented Lagrangians quadratic growth and second-order sufficient optimality conditions
- On the approximate solutions of augmented subproblems within sequential methods for nonlinear programming
MaRDI item Q1639726