On the convergence properties of a second-order augmented Lagrangian method for nonlinear programming problems with inequality constraints
Publication: 831373
DOI: 10.1007/s10957-015-0842-5 · zbMath: 1467.90071 · OpenAlex: W2247931801 · MaRDI QID: Q831373
Publication date: 11 May 2021
Published in: Journal of Optimization Theory and Applications
Full work available at URL: https://doi.org/10.1007/s10957-015-0842-5
nonsmooth analysis; nonlinear programming; generalized Newton method; second-order augmented Lagrangian method
Numerical mathematical programming methods (65K05) Nonlinear programming (90C30) Nonsmooth analysis (49J52)
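For orientation, the title and keywords place the paper in the augmented Lagrangian (method of multipliers) framework for inequality-constrained nonlinear programs. A minimal textbook sketch, with notation \(f\), \(g_i\), \(c\), \(\lambda\) chosen here for illustration and not necessarily the exact formulation analyzed in the paper, is the Powell-Hestenes-Rockafellar augmented Lagrangian for \(\min f(x)\) subject to \(g_i(x)\le 0\), \(i=1,\dots,m\):
\[
L_c(x,\lambda) = f(x) + \frac{1}{2c}\sum_{i=1}^{m}\Bigl(\max\{0,\,\lambda_i + c\,g_i(x)\}^2 - \lambda_i^2\Bigr),
\qquad
\lambda_i^{+} = \max\{0,\,\lambda_i + c\,g_i(x)\},
\]
where \(c>0\) is the penalty parameter and \(\lambda\) the multiplier estimate. The \(\max\) operation makes the associated mappings only semismooth, which is where the generalized Newton method and nonsmooth analysis named in the keywords enter; second-order multiplier methods of the kind studied here refine this generic first-order update.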
Related Items
- The Linear and Asymptotically Superlinear Convergence Rates of the Augmented Lagrangian Method with a Practical Relative Error Criterion
- Unified convergence analysis of a second-order method of multipliers for nonlinear conic programming
Cites Work
- The augmented Lagrangian method for equality and inequality constraints in Hilbert spaces
- The rate of convergence of the augmented Lagrangian method for nonlinear semidefinite programming
- Multiplier methods: A survey
- On the convergence properties of second-order multiplier methods
- Lipschitzian inverse functions, directional derivatives, and applications in \(C^{1,1}\) optimization
- Extended convergence results for the method of multipliers for nonstrictly binding inequality constraints
- A nonsmooth version of Newton's method
- Analysis on a superlinearly convergent augmented Lagrangian method
- Multiplier and gradient methods
- The multiplier method of Hestenes and Powell applied to convex programming
- A Globally Convergent Augmented Lagrangian Algorithm for Optimization with General Constraints and Simple Bounds
- Constraint Nondegeneracy, Strong Regularity, and Nonsingularity in Semidefinite Programming
- Optimization and nonsmooth analysis
- A Convergence Theory for a Class of Quasi-Newton Methods for Constrained Optimization
- Strongly Regular Generalized Equations
- On Penalty and Multiplier Methods for Constrained Minimization
- An Ideal Penalty Function for Constrained Optimization
- Semismooth and Semiconvex Functions in Constrained Optimization
- Augmented Lagrangians and Applications of the Proximal Point Algorithm in Convex Programming
- Newton and Quasi-Newton Methods for a Class of Nonsmooth Equations and Related Problems
- Variational Analysis
- Inverse and implicit function theorems for H-differentiable and semismooth functions
- Convergence Analysis of Some Algorithms for Solving Nonsmooth Equations
- A dual approach to solving nonlinear programming problems by unconstrained optimization
- A further result on an implicit function theorem for locally Lipschitz functions