Perturbed Kuhn-Tucker points and rates of convergence for a class of nonlinear-programming algorithms
Publication: 4047415
DOI: 10.1007/BF01585500 · zbMATH Open: 0294.90078 · MaRDI QID: Q4047415
Authors: Stephen M. Robinson
Publication date: 1974
Published in: Mathematical Programming
Numerical mathematical programming methods (65K05) · Convex programming (90C25) · Nonlinear programming (90C30) · Sensitivity, stability, well-posedness (49K40)
Cites Work
- Title not available
- Title not available
- Title not available
- Sensitivity analysis for nonlinear programming using penalty methods
- Title not available
- The Validity of a Family of Optimization Methods
- Iterative Solution of Nonlinear Optimal Control Problems
- A sufficient condition for continuity of optimal sets in mathematical programming
- On the continuity of the minimum set of a continuous function
- A quadratically-convergent algorithm for general nonlinear programming problems
- Stability in Nonlinear Programming
- Computational experience in sensitivity analysis for nonlinear programming
- Title not available
- Penalty function versus non-penalty function methods for constrained nonlinear programming problems
- Recursive Decision Systems: An Existence Analysis
Cited In (76)
- Second order sensitivity analysis and asymptotic theory of parametrized nonlinear programs
- A projected Newton method in a Cartesian product of balls
- A parallel inexact Newton method for stochastic programs with recourse
- Applications of the method of partial inverses to convex programming: Decomposition
- Estimates for Kuhn-Tucker points of perturbed convex programs
- A robust sequential quadratic programming method
- Lipschitz properties of solutions in mathematical programming
- Global and local convergence of a class of penalty-free-type methods for nonlinear programming
- Superlinearly convergent algorithm for min-max problems
- Degeneracy in NLP and the development of results motivated by its presence
- A recursive quadratic programming algorithm for semi-infinite optimization problems
- Global convergence of an SQP method without boundedness assumptions on any of the iterative sequences
- Computable bounds on parametric solutions of convex problems
- Global and local convergence of a filter line search method for nonlinear programming
- A globally and superlinearly convergent feasible QP-free method for nonlinear programming
- A superlinearly convergent method of feasible directions.
- A globally convergent method for nonlinear programming
- A superlinearly convergent numerical algorithm for nonlinear programming
- A second-order method for the general nonlinear programming problem
- The Lagrange-Newton method for state constrained optimal control problems
- A globally convergent algorithm for nonlinearly constrained optimization problems
- A new SQP method of feasible directions for nonlinear programming.
- Max-min resource allocation
- Superlinearly convergent quasi-Newton algorithms for nonlinearly constrained optimization problems
- A sparsity preserving convexification procedure for indefinite quadratic programs arising in direct optimal control
- On combining feasibility, descent and superlinear convergence in inequality constrained optimization
- Superlinearly convergent variable metric algorithms for general nonlinear programming problems
- Stochastic programming with incomplete information: a survey of results on postoptimization and sensitivity analysis
- Enlarging the region of convergence of Newton's method for constrained optimization
- Sensitivity and stability analysis for nonlinear programming
- Quadratically and superlinearly convergent algorithms for the solution of inequality constrained minimization problems
- A norm-relaxed method of feasible directions for finely discretized problems from semi-infinite programming
- Inexact Newton-type optimization with iterated sensitivities
- Stability and sensitivity analysis for stochastic programming
- An SQP method for optimal control of weakly singular Hammerstein integral equations
- Superlinearly convergent approximate Newton methods for LC\(^1\) optimization problems
- Newton's method and its use in optimization
- Inexact Newton methods for the nonlinear complementarity problem
- Exact penalty function algorithm with simple updating of the penalty parameter
- On the differentiability of optimal values for bounded nonlinear programs with equality constraints
- Sensitivity analysis in economics
- Comments on: Critical Lagrange multipliers: what we currently know about them, how they spoil our lives, and what we can do about it
- Rejoinder on: Critical Lagrange multipliers: what we currently know about them, how they spoil our lives, and what we can do about it
- Sequential quadratic programming methods for parametric nonlinear optimization
- Parametric sensitivity analysis of perturbed PDE optimal control problems with state and control constraints
- A projected gradient and constraint linearization method for nonlinear model predictive control
- Generalized implicit function theorem and its application to parametric optimal control problems
- A robust secant method for optimization problems with inequality constraints
- Some inverse mapping theorems
- Newton-type methods: a broader view
- Inexact Josephy-Newton framework for generalized equations and its applications to local analysis of Newtonian methods for constrained optimization
- A sequential quadratic programming method for potentially infeasible mathematical programs
- Sequential quadratic programming algorithm for discrete optimal control problems with control inequality constraints
- Substitution secant/finite difference method to large sparse minimax problems
- Rates of convergence for adaptive Newton methods
- An iterative working-set method for large-scale nonconvex quadratic programming
- Newton's method for a class of nonsmooth functions
- The linearization method
- Strong metric (sub)regularity of Karush-Kuhn-Tucker mappings for piecewise linear-quadratic convex-composite optimization and the quadratic convergence of Newton's method
- The Lagrange-Newton method for infinite-dimensional optimization problems
- On the global and superlinear convergence of a discretized version of Wilson's method
- Newton's method for singular constrained optimization problems
- A special Newton-type optimization method
- Convergence of algorithms for perturbed optimization problems
- The Lagrange-Newton method for nonlinear optimal control problems
- Sensitivity analysis for nonlinear programming using penalty methods
- Solving Stackelberg equilibrium for multi objective aerodynamic shape optimization
- Perturbation analysis of nonlinear semidefinite programming under Jacobian uniqueness conditions
- Primal superlinear convergence of SQP methods in piecewise linear-quadratic composite optimization
- Exponential decay of sensitivity in graph-structured nonlinear programs
- Fixed-time control under spatiotemporal and input constraints: a quadratic programming based approach
- Convergence analysis of the semismooth Newton method for sparse control problems governed by semilinear elliptic equations
- A globally convergent version of a general recursive algorithm for nonlinear programming
- Direct optimal control and model predictive control
- Superlinear convergence of the sequential quadratic method in constrained optimization
- A competitive inexact nonmonotone filter SQP method: convergence analysis and numerical results