Concepts and methods for discrete and continuous time control under uncertainty
From MaRDI portal
Publication: 1265914
DOI: 10.1016/S0167-6687(98)00006-7
zbMath: 0916.93085
MaRDI QID: Q1265914
Publication date: 19 July 1999
Published in: Insurance: Mathematics and Economics
Keywords: dynamic programming; stochastic optimal control; finite horizon; controlled Markov chains; transforms of the payoff function
49L20: Dynamic programming in optimal control and differential games
93E20: Optimal stochastic control
Related Items
- The linear-quadratic stochastic optimal control problem with random horizon at the finite number of infinitesimal events
- Bayesian optimal control for a non-autonomous stochastic discrete time system
- Risk measurement and risk-averse control of partially observable discrete-time Markov systems
- OPTIMAL PORTFOLIO CONSTRUCTION UNDER PARTIAL INFORMATION FOR A BALANCED FUND
- An investigation of the theory of bank portfolio allocation within a discrete stochastic framework using optimal control techniques
Cites Work
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Probability methods for approximations in stochastic control and for elliptic equations
- Dynamic programming and stochastic control
- Logarithmic transformations for discrete-time, finite-horizon stochastic control problems
- An approximation method for stochastic control problems with partial observation of the state - a method for constructing \(\varepsilon\)-optimal controls
- Explicit solutions for multivariate, discrete-time control problems under uncertainty
- Connections between stochastic control and dynamic games
- On dynamic programming for sequential decision problems under a general form of uncertainty
- A mathematical theory of hints. An approach to the Dempster-Shafer theory of evidence
- Numerical aspects of monotone approximations in convex stochastic control problems
- Successive approximation methods for the solution of optimal control problems
- Designing approximation schemes for stochastic optimization problems, in particular for stochastic programs with recourse
- On the construction of nearly optimal strategies for a general problem of control of partially observed diffusions
- Convergence of discretization procedures in dynamic programming
- Approximations of Dynamic Programs, I
- Approximations of Dynamic Programs, II
- An Approach to Discrete-Time Stochastic Control Problems under Partial Observation
- Optimal Continuous-Parameter Stochastic Control