Diagonalized multiplier methods and quasi-Newton methods for constrained optimization
Publication: 1230357
DOI: 10.1007/BF00933161
zbMath: 0336.65034
OpenAlex: W2022737993
MaRDI QID: Q1230357
Publication date: 1977
Published in: Journal of Optimization Theory and Applications
Full work available at URL: https://doi.org/10.1007/bf00933161
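
For orientation, a minimal sketch of the kind of iteration the title refers to, assuming the standard equality-constrained setting min f(x) subject to h(x) = 0; this is the generic Hestenes-Powell augmented Lagrangian with the inner minimization replaced by a single (quasi-)Newton step, not necessarily the exact scheme analyzed in the paper:

\[
  L_c(x,\lambda) \;=\; f(x) \;+\; \lambda^{\mathsf T} h(x) \;+\; \tfrac{c}{2}\,\lVert h(x)\rVert^{2},
\]
\[
  x_{k+1} \;=\; x_k \;-\; B_k^{-1}\,\nabla_x L_c(x_k,\lambda_k),
  \qquad
  \lambda_{k+1} \;=\; \lambda_k \;+\; c\,h(x_{k+1}),
\]

where c > 0 is a penalty parameter and B_k is either the Hessian \(\nabla^2_{xx} L_c(x_k,\lambda_k)\) (Newton version) or a quasi-Newton approximation to it maintained by a secant update. "Diagonalizing" the multiplier method refers to interleaving such single primal steps with the multiplier update, rather than minimizing L_c to completion between updates.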
Related Items
- Stopping criteria for linesearch methods without derivatives
- Two-step and three-step Q-superlinear convergence of SQP methods
- Dual techniques for constrained optimization
- A heuristic algorithm for nonlinear programming
- A projected Newton method for minimization problems with nonlinear inequality constraints
- Discriminant analysis and density estimation on the finite d-dimensional grid
- Numerical experience with a polyhedral-norm CDT trust-region algorithm
- Quadratically and superlinearly convergent algorithms for the solution of inequality constrained minimization problems
- Decomposition Methods Based on Augmented Lagrangians: A Survey
- An analysis of reduced Hessian methods for constrained optimization
- Interior-point methods: An old and new approach to nonlinear programming
- Recent developments in constrained optimization
- Convergence rate of the augmented Lagrangian SQP method
- Local properties of inexact methods for minimizing nonsmooth composite functions
- A robust and informative method for solving large-scale power flow problems
- A two-stage feasible directions algorithm for nonlinear constrained optimization
- Algorithms for a class of nondifferentiable problems
- On a Newton-like method for constrained nonlinear minimization via slack variables
- Enlarging the region of convergence of Newton's method for constrained optimization
- The strict superlinear order can be faster than the infinite order
- A primal-dual augmented Lagrangian
- Approximate quasi-Newton methods
- An efficient algorithm for a class of equality-constrained optimization problems
- A globally convergent algorithm based on imbedding and parametric optimization
- Augmented Lagrangian homotopy method for the regularization of total variation denoising problems
- A family of the local convergence of the improved secant methods for nonlinear equality constrained optimization subject to bounds on variables
- Partitioned quasi-Newton methods for nonlinear equality constrained optimization
- Efficient alternating minimization methods for variational edge-weighted colorization models
- Variants of the reduced Newton method for nonlinear equality constrained optimization problems
- Multilevel least-change Newton-like methods for equality constrained optimization problems
- Complementary energy approach to contact problems based on consistent augmented Lagrangian formulation
- An implementation of Newton-like methods on nonlinearly constrained networks
- Nonmonotone filter DQMM method for the system of nonlinear equations
- On the convergence properties of second-order multiplier methods
- Algorithms for nonlinear constraints that use lagrangian functions
- Local convergence of the diagonalized method of multipliers
- A geometric method in nonlinear programming
- Properties of updating methods for the multipliers in augmented Lagrangians
- Perturbation lemma for the Newton method with application to the SQP Newton method
- New theoretical results on recursive quadratic programming algorithms
- Augmented Lagrangians which are quadratic in the multiplier
- On Secant Updates for Use in General Constrained Optimization
- A note on the method of multipliers
- Convergent stepsizes for constrained optimization algorithms
- A primal-dual Newton-type algorithm for geometric programs with equality constraints
- On differentiable exact penalty functions
- Equality and inequality constrained optimization algorithms with convergent stepsizes
- Analysis and implementation of a dual algorithm for constrained optimization
Cites Work
- Local convergence of the diagonalized method of multipliers
- A stable approach to Newton's method for general mathematical programming problems in R^n
- On the combination of the multiplier method of Hestenes and Powell with Newton's method
- Multiplier and gradient methods
- Use of the augmented penalty function in mathematical programming problems. I
- On the method of multipliers for mathematical programming problems
- The multiplier method of Hestenes and Powell applied to convex programming
- A general approach to Newton's method for Banach space problems with equality constraints
- Newton’s Method for Optimization Problems with Equality Constraints
- An Ideal Penalty Function for Constrained Optimization
- Stability Theory for Systems of Inequalities, Part II: Differentiable Nonlinear Systems
- Quasi-Newton Methods, Motivation and Theory
- On the Local and Superlinear Convergence of Quasi-Newton Methods
- The Weak Newton Method and Boundary Value Problems
- A new method for the optimization of a nonlinear function subject to nonlinear constraints
- A New Algorithm for Unconstrained Optimization