The following pages link to Adaptive Markov control processes (Q1801301):
Displaying 50 items.
- Long run risk sensitive portfolio with general factors (Q283999)
- Computable approximations for continuous-time Markov decision processes on Borel spaces based on empirical measures (Q302091)
- Discrete-time control for systems of interacting objects with unknown random disturbance distributions: a mean field approach (Q315779)
- A perturbation approach to a class of discounted approximate value iteration algorithms with Borel spaces (Q330284)
- Characterization and computation of infinite-horizon specifications over Markov processes (Q386604)
- First passage problems for nonstationary discrete-time stochastic control systems (Q389826)
- Stationary Markov perfect equilibria in risk sensitive stochastic overlapping generations models (Q402090)
- A bi-level approach for the design of event-triggered control systems over a shared network (Q461467)
- Convergence of probability measures and Markov decision models with incomplete information (Q492169)
- Quantitative model-checking of controlled discrete-time Markov processes (Q515573)
- Controlled Markov decision processes with AVaR criteria for unbounded costs (Q515747)
- Nonstationary discrete-time deterministic and stochastic control systems: bounded and unbounded cases (Q553376)
- Approximation of Markov decision processes with general state space (Q663675)
- A note on the vanishing interest rate approach in average Markov decision chains with continuous and bounded costs (Q673449)
- Control of a random walk with noisy delayed information (Q673894)
- Optimal software testing in the setting of controlled Markov chains (Q706943)
- Robust optimal control using conditional risk mappings in infinite horizon (Q724507)
- Value iteration in countable state average cost Markov decision processes with unbounded costs (Q806687)
- On the optimality equation for average cost Markov control processes with Feller transition probabilities (Q819727)
- On ordinal comparison of policies in Markov reward processes (Q852153)
- Approximation of noncooperative semi-Markov games (Q868575)
- A semimartingale characterization of average optimal stationary policies for Markov decision processes (Q871336)
- Simulation-based optimal sensor scheduling with application to observer trajectory planning (Q883376)
- Markov control models with unknown random state-action-dependent discount factors (Q889107)
- Near optimality of quantized policies in stochastic control under weak continuity conditions (Q892326)
- Partially observed semi-Markov zero-sum games with average payoff (Q930973)
- Inventory management with partially observed nonstationary demand (Q993707)
- Robustness inequality for Markov control processes with unbounded costs (Q1128542)
- A pause control approach to the value iteration scheme in average Markov decision processes (Q1128694)
- A note on the convergence rate of the value iteration scheme in controlled Markov chains (Q1128695)
- Adaptive control of constrained Markov chains: Criteria and policies (Q1174698)
- Average cost Markov decision processes: Optimality conditions (Q1176301)
- A counterexample on the optimality equation in Markov decision chains with the average cost criterion (Q1176601)
- Average optimality in dynamic programming on Borel spaces -- unbounded costs and controls (Q1190402)
- Computationally efficient algorithms for on-line optimization of Markov decision processes (Q1190506)
- Equivalence of Lyapunov stability criteria in a class of Markov decision processes (Q1194209)
- On strong average optimality of Markov decision processes with unbounded costs (Q1197886)
- Minimizing risk models in Markov decision processes with policies depending on target values (Q1282996)
- Weak conditions for average optimality in Markov control processes (Q1324511)
- A note on the Ross-Taylor theorem (Q1339776)
- Risk sensitive control of Markov processes in countable state space (Q1350178)
- Approximation of average cost optimal policies for general Markov decision processes with unbounded costs (Q1362682)
- On confidence intervals from simulation of finite Markov chains (Q1374692)
- Approximate receding horizon approach for Markov decision processes: average reward case (Q1414220)
- Infinite horizon risk sensitive control of discrete time Markov processes with small risk (Q1575293)
- Limiting optimal discounted-cost control of a class of time-varying stochastic systems (Q1575297)
- Sample complexity for Markov chain self-tuner (Q1583211)
- Partially observed optimal stopping problem for discrete-time Markov processes (Q1680763)
- Discrete-time hybrid control in Borel spaces: average cost optimality criterion (Q1746693)
- Solutions of the average cost optimality equation for Markov decision processes with weakly continuous kernel: the fixed-point approach revisited (Q1748297)