The following pages link to (Q4256521):
Displaying 48 items.
- Solving average cost Markov decision processes by means of a two-phase time aggregation algorithm (Q300040) (← links)
- Optimal speech motor control and token-to-token variability: a Bayesian modeling approach (Q310158) (← links)
- Design and evaluation of norm-aware agents based on normative Markov decision processes (Q324660) (← links)
- Knows what it knows: a framework for self-aware learning (Q413843) (← links)
- Ranking policies in discrete Markov decision processes (Q622592) (← links)
- Detecting and repairing anomalous evolutions in noisy environments. Logic programming formalization and complexity results (Q645074) (← links)
- Efficient solutions to factored MDPs with imprecise transition probabilities (Q646498) (← links)
- Computing rank dependent utility in graphical models for sequential decision problems (Q646548) (← links)
- Using mathematical programming to solve factored Markov decision processes with imprecise probabilities (Q648368) (← links)
- Compact and efficient encodings for planning in factored state and action spaces with learned binarized neural network transition models (Q785238) (← links)
- On the undecidability of probabilistic planning and related stochastic optimization problems (Q814465) (← links)
- Weak, strong, and strong cyclic planning via symbolic model checking (Q814470) (← links)
- Contingent planning under uncertainty via stochastic satisfiability (Q814473) (← links)
- Equivalence notions and model minimization in Markov decision processes (Q814474) (← links)
- Solving factored MDPs using non-homogeneous partitions (Q814475) (← links)
- Anytime heuristic search for partial satisfaction planning (Q835822) (← links)
- Task decomposition on abstract states, for planning under nondeterminism (Q835827) (← links)
- Practical solution techniques for first-order MDPs (Q835833) (← links)
- Quantum physical symbol systems (Q853791) (← links)
- Affect control processes: intelligent affective interaction using a partially observable Markov decision process (Q901039) (← links)
- Real-time dynamic programming for Markov decision processes with imprecise probabilities (Q901046) (← links)
- Partially observable Markov decision processes with imprecise parameters (Q1028935) (← links)
- Abstraction and approximate decision-theoretic planning. (Q1399130) (← links)
- Stochastic dynamic programming with factored representations (Q1583230) (← links)
- Bounded-parameter Markov decision processes (Q1583513) (← links)
- Open problems in universal induction & intelligence (Q1662486) (← links)
- Reasoning about discrete and continuous noisy sensors and effectors in dynamical systems (Q1711885) (← links)
- Risk-sensitive multiagent decision-theoretic planning based on MDP and one-switch utility functions (Q1718973) (← links)
- How to decide what to do? (Q1885765) (← links)
- Adaptive-resolution reinforcement learning with polynomial exploration in deterministic domains (Q1959632) (← links)
- Causal learning with Occam's razor (Q2009770) (← links)
- Computer science and decision theory (Q2271874) (← links)
- Recursively modeling other agents for decision making: a research perspective (Q2287201) (← links)
- Scheduling with timed automata (Q2368955) (← links)
- Strong planning under uncertainty in domains with numerous but identical elements (a generic approach) (Q2373707) (← links)
- Graphical models for imprecise probabilities (Q2386115) (← links)
- Complexity results and algorithms for possibilistic influence diagrams (Q2389644) (← links)
- Efficient incremental planning and learning with multi-valued decision diagrams (Q2407480) (← links)
- Strong planning under partial observability (Q2457630) (← links)
- Quantitative controller synthesis for consumption Markov decision processes (Q2680240) (← links)
- Solving an Infinite-Horizon Discounted Markov Decision Process by DC Programming and DCA (Q2955913) (← links)
- Bridging the Gap between Reinforcement Learning and Knowledge Representation: A Logical Off- and On-Policy Framework (Q3011967) (← links)
- Influence of modeling structure in probabilistic sequential decision problems (Q3411302) (← links)
- What you should know about approximate dynamic programming (Q3621932) (← links)
- Probabilistic Reasoning by SAT Solvers (Q3638188) (← links)
- Computational Benefits of Intermediate Rewards for Goal-Reaching Policy Learning (Q5076329) (← links)
- Using Machine Learning for Decreasing State Uncertainty in Planning (Q5139593) (← links)
- A Sufficient Statistic for Influence in Structured Multiagent Environments (Q5856481) (← links)