Nonstationary discrete-time deterministic and stochastic control systems: bounded and unbounded cases
DOI: 10.1016/j.sysconle.2011.04.006
zbMATH Open: 1222.93135
OpenAlex: W2086355133
MaRDI QID: Q553376
FDO: Q553376
Authors: Xianping Guo, Adrián Hernández-del-Valle, Onésimo Hernández-Lerma
Publication date: 27 July 2011
Published in: Systems \& Control Letters
Full work available at URL: https://doi.org/10.1016/j.sysconle.2011.04.006
Recommendations
- Nonstationary discrete-time deterministic and stochastic control systems with infinite horizon
- Nonstationary continuous-time Markov control processes with discounted costs on infinite horizon
- Limiting average cost control problems in a class of discrete-time stochastic systems
- First passage problems for nonstationary discrete-time stochastic control systems
- Stationary policies for lower bounds on the minimum average cost of discrete-time nonlinear control systems
Keywords: nonlinear systems; time-varying systems; discrete-time control systems; nonstationary dynamic programming; time-nonhomogeneous systems
MSC classification: Dynamic programming (90C39); Nonlinear systems in control theory (93C10); Discrete-time control/observation systems (93C55); Optimal stochastic control (93E20)
Cites Work
- Optimal infinite-horizon feedback laws for a general class of constrained discrete-time systems: Stability and moving-horizon approximations
- Turnpike properties in the calculus of variations and optimal control
- Title not available
- Conditions for optimality in dynamic programming and for the limit of n-stage optimal policies to be optimal
- Adaptive Markov control processes
- Remarks on the Control of Discrete-Time Distributed Parameter Systems
- Linear systems.
- Dynamic programming in economics.
- Optimality criteria for deterministic discrete-time infinite horizon optimization
- Limiting average criteria for nonstationary Markov decision processes
- Nonstationary discrete-time deterministic and stochastic control systems with infinite horizon
- Title not available
- Optimal Plans for Dynamic Programming Problems
- Measurable selection theorems for optimization problems
- An existence theorem for discrete-time infinite-horizon optimal control problems
- Control aspects of linear discrete time-varying systems
- On overtaking optimal tracking for linear systems
- Nonzero-sum non-stationary discounted Markov game model
- Solution approximation in infinite horizon linear quadratic control
- Nonstationary continuous-time Markov control processes with discounted costs on infinite horizon
Cited In (9)
- Robust and nonlinear control literature survey (No. 26)
- An average-value-at-risk criterion for Markov decision processes with unbounded costs
- Necessity of the terminal condition in the infinite horizon dynamic optimization problems with unbounded payoff
- Nonstationary discrete-time deterministic and stochastic control systems with infinite horizon
- Markov decision processes with iterated coherent risk measures
- Title not available
- On the Euler equation approach to discrete-time nonstationary optimal control problems
- Title not available
- First passage problems for nonstationary discrete-time stochastic control systems