Equality and inequality constrained optimization algorithms with convergent stepsizes
Publication: 1321310
DOI: 10.1007/BF00939376 · zbMath: 0797.90097 · OpenAlex: W2006750587 · MaRDI QID: Q1321310
Publication date: 27 October 1994
Published in: Journal of Optimization Theory and Applications
Full work available at URL: https://doi.org/10.1007/bf00939376
Keywords: convergence analysis; global convergence; constrained optimization; sequential quadratic programming; augmented Lagrangians; nonlinear equality and inequality constraints; adaptive penalty parameter
MSC classification: Nonlinear programming (90C30); Computational methods for problems pertaining to operations research and mathematical programming (90-08)
Related Items (7)
- A parallel algorithm for constrained optimization problems
- A quadratically convergent scaling Newton's method for nonlinear complementarity problems
- Switching stepsize strategies for sequential quadratic programming
- Flattened aggregate function method for nonlinear programming with many complicated constraints
- A constrained min-max algorithm for rival models of the same economic system
- An interior-point algorithm for nonlinear minimax problems
- Globally convergent interior-point algorithm for nonlinear programming
Cites Work
- The nonlinear programming method of Wilson, Han, and Powell with an augmented Lagrangian type line search function. I. Convergence analysis
- The nonlinear programming method of Wilson, Han, and Powell with an augmented Lagrangian type line search function. II. An efficient implementation with linear least squares subproblems
- A class of superlinearly convergent projection algorithms with relaxed stepsizes
- Convergent stepsizes for constrained optimization algorithms
- Experiments with successive quadratic programming algorithms
- A robust secant method for optimization problems with inequality constraints
- A globally convergent, implementable multiplier method with automatic penalty limitation
- Test examples for nonlinear programming codes
- A globally convergent method for nonlinear programming
- Diagonalized multiplier methods and quasi-Newton methods for constrained optimization
- A robust sequential quadratic programming method
- On the convergence of a sequential quadratic programming method with an augmented Lagrangian line search function
- On the Local Convergence of a Quasi-Newton Method for the Nonlinear Programming Problem
- A recursive quadratic programming algorithm that uses differentiable exact penalty functions
- A Convergence Theory for a Class of Quasi-Newton Methods for Constrained Optimization
- Recursive quadratic programming methods based on the augmented Lagrangian
- A Globally Convergent Version of REQP for Constrained Minimization
- On Secant Updates for Use in General Constrained Optimization
- Reduced quasi-Newton methods with feasibility improvement for nonlinearly constrained optimization
- A superlinearly convergent algorithm for constrained optimization problems
- The watchdog technique for forcing convergence in algorithms for constrained optimization
- On the Local Convergence of Quasi-Newton Methods for Constrained Optimization
- Nonlinear programming via an exact penalty function: Asymptotic analysis
- Superlinearly convergent quasi-Newton algorithms for nonlinearly constrained optimization problems
- Superlinearly convergent variable metric algorithms for general nonlinear programming problems
- On the Convergence of Some Constrained Minimization Algorithms Based on Recursive Quadratic Programming
- Some examples of cycling in variable metric methods for constrained minimization
- Exact Penalty Functions in Constrained Optimization