Handling infeasibility in a large-scale nonlinear optimization algorithm (Q430999): Difference between revisions
From MaRDI portal
Created a new Item |
Changed an Item |
Property / author | |||
Property / author: José Mario Martínez / rank | |||
Normal rank | |||
Property / review text | |||
In constrained optimization, one aims to find the lowest possible value of an objective function within a given domain. Practical nonlinear programming algorithms may converge to infeasible points even when feasible points exist. Therefore, optimization users who wish to find feasible and optimal solutions of practical problems usually change the initial approximation and/or the algorithmic parameters when an almost infeasible point is found. The effectiveness of this trial-and-error process is, in part, related to the ability of the algorithm to stop quickly when the generated sequence is fated to converge to an infeasible point. It is sensible to detect this situation as early as possible, in order to have time to change initial approximations and parameters, with the aim of obtaining convergence to acceptable solutions in further runs. In this paper, a recently introduced augmented Lagrangian algorithm is modified so that the probability of quick detection of asymptotic infeasibility is enhanced. The modified algorithm preserves the property of convergence to stationary points of the sum of squares of infeasibilities, without harming convergence to Karush-Kuhn-Tucker points in feasible cases. | |||
Property / review text: In constrained optimization, one aims to find the lowest possible value of an objective function within a given domain. Practical nonlinear programming algorithms may converge to infeasible points even when feasible points exist. Therefore, optimization users who wish to find feasible and optimal solutions of practical problems usually change the initial approximation and/or the algorithmic parameters when an almost infeasible point is found. The effectiveness of this trial-and-error process is, in part, related to the ability of the algorithm to stop quickly when the generated sequence is fated to converge to an infeasible point. It is sensible to detect this situation as early as possible, in order to have time to change initial approximations and parameters, with the aim of obtaining convergence to acceptable solutions in further runs. In this paper, a recently introduced augmented Lagrangian algorithm is modified so that the probability of quick detection of asymptotic infeasibility is enhanced. The modified algorithm preserves the property of convergence to stationary points of the sum of squares of infeasibilities, without harming convergence to Karush-Kuhn-Tucker points in feasible cases. / rank | |||
Normal rank | |||
Property / reviewed by | |||
Property / reviewed by: Nada I. Djuranović-Miličić / rank | |||
Normal rank | |||
Property / Mathematics Subject Classification ID | |||
Property / Mathematics Subject Classification ID: 65K05 / rank | |||
Normal rank | |||
Property / Mathematics Subject Classification ID | |||
Property / Mathematics Subject Classification ID: 90C30 / rank | |||
Normal rank | |||
Property / zbMATH DE Number | |||
Property / zbMATH DE Number: 6050438 / rank | |||
Normal rank | |||
Property / zbMATH Keywords | |||
augmented Lagrangian algorithm | |||
Property / zbMATH Keywords: augmented Lagrangian algorithm / rank | |||
Normal rank | |||
Property / zbMATH Keywords | |||
nonlinear programming | |||
Property / zbMATH Keywords: nonlinear programming / rank | |||
Normal rank | |||
Property / zbMATH Keywords | |||
numerical experiments | |||
Property / zbMATH Keywords: numerical experiments / rank | |||
Normal rank | |||
Property / zbMATH Keywords | |||
constrained optimization | |||
Property / zbMATH Keywords: constrained optimization / rank | |||
Normal rank | |||
Property / zbMATH Keywords | |||
infeasible points | |||
Property / zbMATH Keywords: infeasible points / rank | |||
Normal rank | |||
Property / zbMATH Keywords | |||
trial-and-error process | |||
Property / zbMATH Keywords: trial-and-error process / rank | |||
Normal rank | |||
Property / zbMATH Keywords | |||
convergence | |||
Property / zbMATH Keywords: convergence / rank | |||
Normal rank | |||
Property / zbMATH Keywords | |||
Karush-Kuhn-Tucker points | |||
Property / zbMATH Keywords: Karush-Kuhn-Tucker points / rank | |||
Normal rank |
Revision as of 23:52, 29 June 2023
scientific article
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Handling infeasibility in a large-scale nonlinear optimization algorithm | scientific article | |
Statements
Handling infeasibility in a large-scale nonlinear optimization algorithm (English)
0 references
26 June 2012
0 references
In constrained optimization, one aims to find the lowest possible value of an objective function within a given domain. Practical nonlinear programming algorithms may converge to infeasible points even when feasible points exist. Therefore, optimization users who wish to find feasible and optimal solutions of practical problems usually change the initial approximation and/or the algorithmic parameters when an almost infeasible point is found. The effectiveness of this trial-and-error process is, in part, related to the ability of the algorithm to stop quickly when the generated sequence is fated to converge to an infeasible point. It is sensible to detect this situation as early as possible, in order to have time to change initial approximations and parameters, with the aim of obtaining convergence to acceptable solutions in further runs. In this paper, a recently introduced augmented Lagrangian algorithm is modified so that the probability of quick detection of asymptotic infeasibility is enhanced. The modified algorithm preserves the property of convergence to stationary points of the sum of squares of infeasibilities, without harming convergence to Karush-Kuhn-Tucker points in feasible cases.
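The idea the review describes, an augmented Lagrangian iteration that declares "asymptotically infeasible" when the penalty parameter has grown large yet the constraint violation stops decreasing, can be sketched as follows. This is a minimal illustrative sketch, not the algorithm of the paper under review: the inner solver, the update rules, and the thresholds (`rho > 1e6`, the `0.9` stall factor) are assumptions chosen for the demo problem.

```python
# Minimal augmented Lagrangian sketch with an infeasibility test,
# for the problem  min f(x)  s.t.  h(x) = 0.  Illustrative only.
import numpy as np

def augmented_lagrangian(f_grad, h, h_jac, x, tol=1e-6, max_outer=50):
    lam = np.zeros(len(h(x)))   # Lagrange multiplier estimates
    rho = 10.0                  # penalty parameter
    prev_infeas = np.inf
    for _ in range(max_outer):
        # Inner solve: gradient descent on
        # L(x) = f(x) + lam.h(x) + (rho/2)||h(x)||^2
        step = 0.4 / (1.0 + rho)  # conservative step for this tiny demo
        for _ in range(2000):
            g = f_grad(x) + h_jac(x).T @ (lam + rho * h(x))
            if np.linalg.norm(g) < 1e-8:
                break
            x = x - step * g
        infeas = np.linalg.norm(h(x))
        if infeas < tol:
            return x, "feasible KKT candidate"
        # Heuristic infeasibility test: the penalty is already huge,
        # yet the constraint violation has essentially stalled.
        if rho > 1e6 and infeas > 0.9 * prev_infeas:
            return x, "likely infeasible"
        lam = lam + rho * h(x)            # first-order multiplier update
        if infeas > 0.5 * prev_infeas:    # poor progress -> raise penalty
            rho *= 10.0
        prev_infeas = infeas
    return x, "max iterations"

# Inconsistent constraints x0 = 0 and x0 = 1: no feasible point exists.
# The iterates settle near x0 = 0.5, a stationary point of ||h(x)||^2.
f_grad = lambda x: np.zeros_like(x)
h = lambda x: np.array([x[0], x[0] - 1.0])
h_jac = lambda x: np.array([[1.0], [1.0]])
x, status = augmented_lagrangian(f_grad, h, h_jac, np.array([3.0]))
```

On this infeasible instance the method stops with the "likely infeasible" status instead of cycling while the penalty grows without bound, which mirrors the behavior the review highlights: convergence to a stationary point of the sum of squares of infeasibilities, with an early stop so the user can restart with different data.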
0 references
augmented Lagrangian algorithm
0 references
nonlinear programming
0 references
numerical experiments
0 references
constrained optimization
0 references
infeasible points
0 references
trial-and-error process
0 references
convergence
0 references
Karush-Kuhn-Tucker points
0 references