Policy iteration for bounded-parameter POMDPs (Q1955470)
Full work available at URL: https://doi.org/10.1007/s00500-012-0932-3
OpenAlex ID: W2069186469

Cites work:
    Bayesian Sequential Detection With Phase-Distributed Change Time and Nonlinear Penalty—A POMDP Lattice Programming Approach
    Bounded-parameter Markov decision processes
    Markovian Decision Processes with Uncertain Transition Probabilities
    Q3624036
    Partially observable Markov decision processes with imprecise parameters
    Planning and acting in partially observable stochastic domains
    Q4315289
    Bounded Parameter Markov Decision Processes with Average Reward Criterion


Language: English
Label: Policy iteration for bounded-parameter POMDPs
Description: scientific article

Statements

Title: Policy iteration for bounded-parameter POMDPs (English)
Publication date: 11 June 2013
Keywords: decision making under uncertainty; bounded-parameter POMDP; policy iteration; optimistic optimality; finite-state controller; \(\epsilon\)-optimal policy