On the method of multipliers for mathematical programming problems (Q2552449)

From MaRDI portal
Property / MaRDI profile type: MaRDI publication profile

Property / cites work:
- Sequential gradient-restoration algorithm for the minimization of constrained functions. Ordinary and conjugate gradient versions
- Q5572711
- Q5612814
- Q5588268
- Q5641025
- Q5651820
- Constrained optimization problems using multiplier methods

Property / full work available at URL: https://doi.org/10.1007/bf00934960

Property / OpenAlex ID: W2030409788


Language: English
Label: On the method of multipliers for mathematical programming problems
Description: scientific article

    Statements

    On the method of multipliers for mathematical programming problems (English)
    1972
    In this paper, the numerical solution of the basic problem of mathematical programming is considered. This is the problem of minimizing a function \(f(x)\) subject to a constraint \(\varphi(x) = 0\). Here, \(f\) is a scalar, \(x\) an \(n\)-vector, and \(\varphi\) a \(q\)-vector, with \(q<n\). The approach employed is based on the introduction of a special function which allows one to view the vector \(x\) as unconstrained. Specifically, the function \(f(x)\) is replaced by the augmented penalty function \(W(x,\lambda,k) = f(x) + \lambda^T \varphi(x) + k\,\varphi^T(x)\varphi(x)\). Here, the \(q\)-vector \(\lambda\) is an approximation to the Lagrange multiplier and the scalar \(k>0\) is the penalty constant. Previously, the augmented penalty function \(W(x,\lambda,k)\) was used by Hestenes in his method of multipliers. In Hestenes' version, the method of multipliers proceeds in cycles, in each of which both the multiplier and the penalty constant are held fixed. After the minimum of the augmented penalty function is achieved in any given cycle, the multiplier \(\lambda\) is updated, while the penalty constant \(k\) is held unchanged. In this paper, two modifications of the method of multipliers are presented in order to improve its convergence characteristics. The improved convergence is achieved by (i) increasing the updating frequency so that the number of iterations in a cycle is \(\Delta N=1\) for the ordinary-gradient algorithm and the modified-quasilinearization algorithm and \(\Delta N=n\) for the conjugate-gradient algorithm.
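    The cycle structure described in the review (inner unconstrained minimization of \(W\) with \(\lambda\) and \(k\) fixed, followed by a multiplier update with \(k\) unchanged) can be sketched as follows. This is a minimal illustration on a toy quadratic problem, not the paper's algorithm: the objective, constraint, step size, and iteration counts are assumptions, and the update \(\lambda \leftarrow \lambda + 2k\,\varphi(x)\) is the standard Hestenes rule consistent with the form of \(W\) given above.

```python
def method_of_multipliers():
    # Toy problem (illustrative): minimize f(x) = x1^2 + x2^2
    # subject to phi(x) = x1 + x2 - 1 = 0.
    # Known solution: x = (0.5, 0.5), Lagrange multiplier lambda = -1.
    # Augmented penalty function: W = f + lam*phi + k*phi^2.
    k = 10.0            # penalty constant, held fixed across all cycles
    lam = 0.0           # multiplier estimate, updated once per cycle
    x = [0.0, 0.0]
    eta = 0.02          # gradient step (stable for this Hessian at k = 10)

    for _cycle in range(20):            # outer cycles of the method
        for _ in range(500):            # inner minimization of W by gradient descent
            c = x[0] + x[1] - 1.0       # constraint violation phi(x)
            # grad W = grad f + (lam + 2*k*phi) * grad phi
            g = [2.0 * x[0] + lam + 2.0 * k * c,
                 2.0 * x[1] + lam + 2.0 * k * c]
            x = [x[0] - eta * g[0], x[1] - eta * g[1]]
        # Hestenes multiplier update; k stays unchanged
        lam += 2.0 * k * (x[0] + x[1] - 1.0)

    return x, lam
```

    The update rule follows from stationarity of \(W\): at the inner minimum, \(\nabla f + (\lambda + 2k\varphi)^T \nabla\varphi = 0\), so \(\lambda + 2k\varphi(x)\) is the improved multiplier estimate. The modifications studied in the paper shorten the inner cycle (e.g. to \(\Delta N = 1\) iterations) rather than minimizing \(W\) to convergence as done here.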

    Identifiers