Modeling learning effects via successive linear programming (Q1824564)
From MaRDI portal
Property / MaRDI profile type: MaRDI publication profile
Property / full work available at URL: https://doi.org/10.1016/0377-2217(89)90274-9
Property / OpenAlex ID: W1974168298
Property / cites work: Successive Linear Programming at Exxon
Property / cites work: Design and Testing of a Generalized Reduced Gradient Code for Nonlinear Programming
Property / cites work: Nonlinear Optimization by Successive Linear Programming
Property / cites work: Product-Mix Models When Learning Effects are Present
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Modeling learning effects via successive linear programming | scientific article | |
Statements
Modeling learning effects via successive linear programming (English)
1989
The learning effect is given by a model of the form: \[ \text{maximize}\quad \sum^{n_1}_{j=1}\Bigl[(p_j-v_j)x_j-\sum^{m_1}_{i=1}(v_i a_{ij})x_j^{(1+b_{ij})}\Bigr]+\sum^{N}_{j=n_1+1}C_j y_j \] subject to \[ \sum^{n_1}_{j=1}a_{ij}x_j^{(1+b_{ij})}+\sum^{N}_{j=n_1+1}a_{ij}y_j=r_i,\quad i=1,\dots,m_1, \] \[ \sum^{n_1}_{j=1}a_{ij}x_j+\sum^{N}_{j=n_1+1}a_{ij}y_j=r_i,\quad i=m_1+1,\dots,m, \] \[ \ell^1_j\leq x_j\leq u^1_j,\quad \ell^2_j\leq y_j\leq u^2_j\quad\text{for all } j. \] The problem is solved by successive linear programming approximations. A numerical example is given.
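To illustrate the successive linear programming idea, the sketch below linearizes the nonlinear terms \(x_j^{(1+b_{ij})}\) around the current iterate and re-solves the resulting LP until the iterates stop moving. This is a minimal sketch, not the authors' code: the problem data, the choice of scipy.optimize.linprog as LP solver, the treatment of the resource rows as \(\leq\) constraints, and the per-iteration step limit on \(x\) are all illustrative assumptions.

```python
# Minimal SLP sketch for a product-mix model with learning effects.
# All data are hypothetical; resource rows are treated as "<=" for simplicity.
import numpy as np
from scipy.optimize import linprog

# --- hypothetical data: two learning products x, one ordinary product y ---
p = np.array([10.0, 8.0])       # selling prices p_j
v = np.array([2.0, 1.5])        # direct unit costs v_j
C = np.array([4.0])             # unit profit C_j of the ordinary product
cost_rate = np.array([1.0])     # resource cost rates v_i (one resource)
a_x = np.array([[0.5, 0.4]])    # resource usage a_ij of learning products
a_y = np.array([[0.3]])         # resource usage a_ij of ordinary products
b = np.array([[-0.15, -0.10]])  # learning exponents b_ij (usage per unit falls)
r = np.array([60.0])            # resource availability r_i
ub_x, ub_y = 100.0, 100.0       # simple upper bounds
n1 = len(p)

def resource_use(x):
    """Nonlinear consumption  sum_j a_ij * x_j^(1+b_ij)  of the learning products."""
    return (a_x * x[None, :] ** (1.0 + b)).sum(axis=1)

x = np.array([1.0, 1.0])        # starting point (> 0 so the gradient is defined)
y = np.array([1.0])
step = 5.0                      # trust-region-like move limit on x per iteration

for _ in range(50):
    # First-order expansion of x^(1+b) around the current iterate x_k:
    #   x^(1+b) ~= x_k^(1+b) + (1+b) * x_k^b * (x - x_k)
    grad = a_x * (1.0 + b) * x[None, :] ** b   # Jacobian of resource_use
    const = resource_use(x) - grad @ x         # constant part of the linearization

    # Profit = (p-v)'x - cost_rate'*resource_use(x) + C'y; linprog minimizes,
    # so negate and use the linearized resource-cost term.
    c = np.concatenate([-(p - v) + grad.T @ cost_rate, -C])

    A_ub = np.hstack([grad, a_y])              # linearized resource rows
    b_ub = r - const

    bounds = [(max(0.0, xj - step), min(ub_x, xj + step)) for xj in x] \
             + [(0.0, ub_y)] * len(y)

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    assert res.success, res.message
    x_new, y_new = res.x[:n1], res.x[n1:]

    if np.max(np.abs(x_new - x)) < 1e-6:       # iterates stopped moving
        x, y = x_new, y_new
        break
    x, y = x_new, y_new

profit = (p - v) @ x - cost_rate @ resource_use(x) + C @ y
print("x =", x, "y =", y, "profit =", round(float(profit), 3))
```

In this sketch the exponents \(b_{ij}\) are negative, so \(x^{(1+b)}\) is concave and its tangent over-estimates actual consumption; each LP solution therefore stays feasible for the nonlinear \(\leq\) constraints as assumed here.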
learning effect
successive linear programming approximations