An approximation method for stochastic control problems with partial observation of the state - a method for constructing \(\epsilon\)-optimal controls (Q1108258)
From MaRDI portal
scientific article
Language | Label | Description | Also known as |
---|---|---|---|
English | An approximation method for stochastic control problems with partial observation of the state - a method for constructing \(\epsilon\)-optimal controls | scientific article | |
Statements
An approximation method for stochastic control problems with partial observation of the state - a method for constructing \(\epsilon\)-optimal controls (English)
1987
The authors consider a continuous-time stochastic control problem with partial observations. The dynamics of the controlled system are described by an Itô stochastic differential equation, and the observation process by another stochastic differential equation. The cost of the controls and of the resulting motion of the system over a fixed time interval is measured by an expected-value functional comprising the integral of a running cost function and a terminal cost term. By approximating the controls and the coefficient functions of the system's stochastic differential equations by step functions and discretizing the observations, the authors prove that, under rather mild assumptions, the infinite-dimensional problem can be successively approximated by a discrete-time stochastic control problem with complete state observation and with finitely many finitely valued states and controls. Dynamic programming techniques may therefore be applied to the approximating problem to compute controls that are shown to be \(\epsilon\)-optimal for the original problem as well.
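To illustrate the final step of the reduction, here is a minimal sketch (not the authors' construction) of backward dynamic programming applied to the kind of approximating problem the review describes: discrete time, finitely many states and controls, complete state observation. All transition matrices and cost values are made-up example data.

```python
import numpy as np

def backward_dp(P, running_cost, terminal_cost):
    """Finite-horizon dynamic programming on a finite-state problem.

    P[t][u]            : transition matrix at stage t under control u
    running_cost[t][u] : vector of stage costs over states for control u
    terminal_cost      : vector of terminal costs over states
    Returns the value function V[t][x] and an optimal policy policy[t][x].
    """
    T = len(P)                          # number of decision stages
    n_states = len(terminal_cost)
    V = np.zeros((T + 1, n_states))
    V[T] = terminal_cost
    policy = np.zeros((T, n_states), dtype=int)
    for t in range(T - 1, -1, -1):
        # Q[u, x] = immediate cost plus expected cost-to-go
        Q = np.array([running_cost[t][u] + P[t][u] @ V[t + 1]
                      for u in range(len(P[t]))])
        policy[t] = Q.argmin(axis=0)    # minimizing control per state
        V[t] = Q.min(axis=0)
    return V, policy

# Toy instance: 2 states, 2 controls, horizon 1 (hypothetical data).
P = [[np.eye(2),                       # control 0: stay put
      np.array([[0., 1.], [1., 0.]])]] # control 1: swap states
running_cost = [[np.array([0., 0.]),   # control 0 is free
                 np.array([1., 1.])]]  # control 1 costs 1
terminal_cost = np.array([0., 10.])
V, pol = backward_dp(P, running_cost, terminal_cost)
```

In state 1 it pays to spend 1 switching to state 0 (cost-to-go 1 instead of 10), while in state 0 doing nothing is optimal; the approximating problems in the paper are solved by exactly this kind of backward recursion, only over the discretized observation-dependent state.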
random measure transformation
stochastic approximation
continuous-time stochastic control problem with partial observations
dynamic programming techniques