Successive linearization methods for large-scale nonlinear programming problems (Q1185122)
Language | Label | Description | Also known as |
---|---|---|---|
English | Successive linearization methods for large-scale nonlinear programming problems | scientific article | |
Statements
Successive linearization methods for large-scale nonlinear programming problems (English)
28 June 1992
Consider the nonlinear programming problem (1) \(\min f(x)\), \(g_i(x)=0\) for \(i=1,\dots,\bar m\), \(g_i(x)\geq 0\) for \(i=\bar m+1,\dots,m\), where \(f: R^n\to R\) and \(g_i: R^n\to R\) are continuously differentiable. To solve this large-scale problem the authors introduce an algorithm (Successive Linear Method, SLM) whose main feature is that it preserves any sparsity the original problem may have. SLM rests on two main points: (i) problem (1) is replaced by an \(L_1\)-exact penalty function problem, i.e. (2) \(\min p(x)=f(x)+\sum^{\bar m}_{i=1} r_i|g_i(x)|+\sum^m_{i=\bar m+1} r_i\{-g_i(x)\}_+\); (ii) the solution of (2) is found iteratively, by computing at each iteration an appropriate solution \(x+d\) of a first-order approximation of \(p\) at \(x\), augmented by the term \((\lambda/2)d^T Cd\), where \(\lambda\) is a positive parameter and \(C\) is a symmetric positive definite matrix. When \(C\) is diagonal, the solution \(x+d\) is calculated by the successive overrelaxation algorithm (SOR) or the conjugate gradient algorithm, both of which preserve the sparsity present in (1). The authors describe a practical implementation of SLM and report numerical results on test problems. An algorithm related to SLM has been proposed by \textit{J. Zhang}, \textit{N.-H. Kim} and \textit{L. Lasdon} [Manage. Sci. 31, 1312-1331 (1985; Zbl 0601.90128)], but it does not completely preserve the sparsity in (1).
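To make the subproblem in point (ii) concrete, here is a minimal sketch of one SLM-style step in Python. It builds the first-order model of the \(L_1\) exact penalty \(p\) at \(x\), adds the regularization \((\lambda/2)d^T Cd\), and hands the resulting convex program to the generic solver cvxpy; the function name `slm_step` and the use of cvxpy are illustrative assumptions, not the authors' implementation, which solves the diagonal-\(C\) case with the sparsity-preserving SOR or conjugate gradient iterations described above.

```python
# A sketch of the SLM subproblem from point (ii), assuming the
# formulation in the review; slm_step and the use of cvxpy are
# illustrative choices, not the paper's own solver.
import numpy as np
import cvxpy as cp

def slm_step(x, grad_f, g, jac_g, m_bar, r, lam, C):
    """Solve  min_d  grad_f^T d
                    + sum_{i<=m_bar}  r_i |g_i(x) + a_i^T d|     (equalities)
                    + sum_{i> m_bar}  r_i {-(g_i(x) + a_i^T d)}_+ (inequalities)
                    + (lam/2) d^T C d,
    the first-order model of the L1 exact penalty p at the point x."""
    n = x.size
    d = cp.Variable(n)
    lin = g + jac_g @ d                            # linearized constraints at x
    eq_pen = r[:m_bar] @ cp.abs(lin[:m_bar])       # L1 penalty on equalities
    ineq_pen = r[m_bar:] @ cp.pos(-lin[m_bar:])    # penalty on violated g_i >= 0
    model = grad_f @ d + eq_pen + ineq_pen + (lam / 2) * cp.quad_form(d, C)
    cp.Problem(cp.Minimize(model)).solve()
    return x + d.value
```

When \(C\) is diagonal, as in the paper's sparsity-preserving variant, the quadratic term separates across coordinates, which is what makes the SOR and conjugate gradient schemes mentioned in the review attractive in place of a general-purpose solver.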
successive linear method
\(L_1\)-exact penalty function