Pages that link to "Item:Q2521737"
The following pages link to Optimal control of Markov processes with incomplete state information (Q2521737):
Displaying 50 items.
- Planning and acting in partially observable stochastic domains (Q72343)
- Control: a perspective (Q463779)
- Uniform Fatou's lemma (Q530342)
- Computation of approximate optimal policies in a partially observed inventory model with rain checks (Q642900)
- Bottom-up learning of hierarchical models in a class of deterministic POMDP environments (Q747543)
- Application of Jensen's inequality to adaptive suboptimal design (Q754774)
- A survey of solution techniques for the partially observed Markov decision process (Q804478)
- On the undecidability of probabilistic planning and related stochastic optimization problems (Q814465)
- Active inference on discrete state-spaces: a synthesis (Q826935)
- Affect control processes: intelligent affective interaction using a partially observable Markov decision process (Q901039)
- Lumpability in compartmental models (Q922423)
- Partially observable Markov decision processes with imprecise parameters (Q1028935)
- A tutorial on partially observable Markov decision processes (Q1042307)
- Transformation of partially observable Markov decision processes into piecewise linear ones (Q1055694)
- Policy structure for discrete time Markov chain disorder problems (Q1077336)
- A unified model of qualitative belief change: a dynamical systems perspective (Q1128491)
- Monotone control laws for noisy, countable-state Markov chains (Q1145071)
- Application of two inequality results for concave functions to a stochastic optimization problem (Q1232388)
- Finite-state, discrete-time optimization with randomly varying observation quality (Q1233421)
- On the Bellman principle for decision problems with random decision policies (Q1236073)
- Separation of estimation and control for decentralized stochastic control systems (Q1251210)
- Analysis of an identification algorithm arising in the adaptive estimation of Markov chains (Q1262282)
- Monitoring machine operations using on-line sensors (Q1278522)
- Optimal cost and policy for a Markovian replacement problem (Q1321099)
- Recursive estimation of a discrete-time Markov chain (Q1324260)
- How to count and guess well: Discrete adaptive filters (Q1330925)
- Remarks on the existence of solutions to the average cost optimality equation in Markov decision processes (Q1814435)
- Non-deterministic weighted automata evaluated over Markov chains (Q2009651)
- On infinite horizon active fault diagnosis for a class of non-linear non-Gaussian systems (Q2018404)
- Knowledge-based programs as succinct policies for partially observable domains (Q2046009)
- Partially observable environment estimation with uplift inference for reinforcement learning based recommendation (Q2071406)
- Optimizing active surveillance for prostate cancer using partially observable Markov decision processes (Q2083968)
- Rollout approach to sensor scheduling for remote state estimation under integrity attack (Q2165967)
- A survey of decision making and optimization under uncertainty (Q2241216)
- State estimation for partially observed Markov chains (Q2264204)
- Monotonicity properties for two-action partially observable Markov decision processes on partially ordered spaces (Q2286880)
- Stratified breast cancer follow-up using a continuous state partially observable Markov decision process (Q2333025)
- On Markov chains induced by partitioned transition probability matrices (Q2430308)
- Partially observable Markov decision model for the treatment of early prostate cancer (Q2430562)
- State observation accuracy and finite-memory policy performance (Q2450694)
- Optimal stochastic control (Q2529522)
- Optimal control of Markov processes with incomplete state-information. II: The convexity of the loss-function (Q2532030)
- An adaptive automaton controller for discrete-time Markov processes (Q2544884)
- Problems of identification and control (Q2546485)
- A survey of algorithmic methods for partially observed Markov decision processes (Q2638960)
- On the average cost optimality equation and the structure of optimal policies for partially observable Markov decision processes (Q2638968)
- On the computation of the optimal cost function for discrete time Markov models with partial observations (Q2638970)
- Optimal management of stochastic invasion in a metapopulation with Allee effects (Q2676044)
- Optimal sensor scheduling for hidden Markov model state estimation (Q3151501)
- PARTIALLY OBSERVABLE MARKOV DECISION PROCESSES AND PERIODIC POLICIES WITH APPLICATIONS (Q3165699)