Decomposition coordination in deterministic and stochastic optimization (Q2012653)
Language | Label | Description | Also known as |
---|---|---|---|
English | Decomposition coordination in deterministic and stochastic optimization | scientific article | |
Statements
Decomposition coordination in deterministic and stochastic optimization (English)
2 August 2017
This volume consists of two main parts, the first dealing with basic methods of mathematical programming and the second with stochastic optimization methods. The first part presents fundamental methods for the approximate solution of unconstrained and constrained optimization problems. Gradient, subgradient and Newton methods are treated, together with procedures for separable optimization problems and discrete-time decision processes. Furthermore, several regularization and relaxation techniques are described. For constrained optimization problems, projection and augmented Lagrangian procedures are reviewed, and the corresponding convergence properties are presented.

Optimization problems under stochastic uncertainty, considered in the second part, depend on several technical parameters that are not known, fixed quantities but have to be modeled as random variables with a certain joint probability distribution. Because of this parameter uncertainty, problems of this type cannot be solved directly; using decision-theoretic principles, they are replaced by deterministic substitute problems. The main substitutes involve the expected objective function and/or the probability that the given constraints are fulfilled. Owing to the complexity of expectation and probability functions, the resulting substitute problems must be solved by numerical optimization methods. Among others, the following iterative procedures and their convergence properties are discussed: projected stochastic gradient methods and stochastic gradient techniques based on auxiliary functions. After introducing the Lagrangian and the corresponding necessary optimality conditions, projected stochastic gradient methods for constrained stochastic programs are treated; modifications based on certain auxiliary functions are also discussed, and the corresponding convergence properties are given.

The solution techniques under consideration are illustrated with many solved exercises and concrete applications. The monograph is suitable for readers with a good knowledge of calculus, linear algebra, probability theory and basic optimization theory.
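As a brief illustration of the projected stochastic gradient iteration mentioned above, the following minimal sketch minimizes an expected-value substitute objective over a simple box constraint. The quadratic objective, the constraint set and all parameter values are illustrative assumptions chosen for this sketch and are not taken from the monograph.

```python
import numpy as np

# Minimal sketch of a projected stochastic gradient method for the
# expected-value substitute problem  min_{x in C} E[F(x, xi)].
# The quadratic objective F, the box constraint C and all parameter
# values below are illustrative assumptions, not taken from the book.

rng = np.random.default_rng(0)

def stochastic_gradient(x, xi):
    """Unbiased estimate of the gradient of E[F(x, xi)] for
    F(x, xi) = 0.5 * ||x - xi||^2, which is minimized at x = E[xi]."""
    return x - xi

def project_onto_box(x, lower=-1.0, upper=1.0):
    """Euclidean projection onto the box C = [lower, upper]^n."""
    return np.clip(x, lower, upper)

def projected_sgd(dim=5, steps=2000):
    x = np.zeros(dim)
    for k in range(1, steps + 1):
        xi = rng.normal(loc=0.3, scale=1.0, size=dim)  # sample of the random parameter
        step = 1.0 / k                                  # diminishing step sizes
        x = project_onto_box(x - step * stochastic_gradient(x, xi))
    return x

if __name__ == "__main__":
    print(projected_sgd())  # each coordinate approaches E[xi] = 0.3
```

With diminishing step sizes the iterates approach the minimizer of the expected objective over the constraint set, which is the kind of convergence property discussed in the second part of the book.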