Nonstationary discrete-time deterministic and stochastic control systems: bounded and unbounded cases
From MaRDI portal
Publication: 553376
Recommendations
- Nonstationary discrete-time deterministic and stochastic control systems with infinite horizon
- Nonstationary continuous-time Markov control processes with discounted costs on infinite horizon
- Limiting average cost control problems in a class of discrete-time stochastic systems
- First passage problems for nonstationary discrete-time stochastic control systems
- Stationary policies for lower bounds on the minimum average cost of discrete-time nonlinear control systems
Cites work
- scientific article; zbMATH DE number 1325008 (no title available)
- scientific article; zbMATH DE number 2154249 (no title available)
- Adaptive Markov control processes
- An existence theorem for discrete-time infinite-horizon optimal control problems
- Conditions for optimality in dynamic programming and for the limit of n-stage optimal policies to be optimal
- Control aspects of linear discrete time-varying systems
- Dynamic programming in economics.
- Limiting average criteria for nonstationary Markov decision processes
- Linear systems.
- Measurable selection theorems for optimization problems
- Nonstationary continuous-time Markov control processes with discounted costs on infinite horizon
- Nonstationary discrete-time deterministic and stochastic control systems with infinite horizon
- Nonzero-sum non-stationary discounted Markov game model
- On overtaking optimal tracking for linear systems
- Optimal Plans for Dynamic Programming Problems
- Optimal infinite-horizon feedback laws for a general class of constrained discrete-time systems: Stability and moving-horizon approximations
- Optimality criteria for deterministic discrete-time infinite horizon optimization
- Remarks on the Control of Discrete-Time Distributed Parameter Systems
- Solution approximation in infinite horizon linear quadratic control
- Turnpike properties in the calculus of variations and optimal control
Cited in (12)
- Necessity of the terminal condition in the infinite horizon dynamic optimization problems with unbounded payoff
- Nonstationary discrete-time deterministic and stochastic control systems with infinite horizon
- scientific article; zbMATH DE number 4064895 (no title available)
- On the Euler equation approach to discrete-time nonstationary optimal control problems
- Markov decision processes with iterated coherent risk measures
- Robust and nonlinear control literature survey (No. 26)
- Discrete-time hybrid control processes with unbounded costs
- Stationary policies for lower bounds on the minimum average cost of discrete-time nonlinear control systems
- Expected value based optimal control for discrete-time stochastic noncausal systems
- First passage problems for nonstationary discrete-time stochastic control systems
- An average-value-at-risk criterion for Markov decision processes with unbounded costs
- Nonstationary Markov decision processes with risk probability criteria