Discounted continuous-time Markov decision processes with unbounded rates and randomized history-dependent policies: the dynamic programming approach (Q457293)

From MaRDI portal
Property / DOI
 
Property / DOI: 10.1007/s10288-013-0236-1 / rank
Normal rank
 
Property / author
 
Property / author: Aleksey B. Piunovskiy / rank
Normal rank
 
Property / author
 
Property / author: Yi Zhang / rank
Normal rank
 
Property / Mathematics Subject Classification ID
 
Property / Mathematics Subject Classification ID: 90C40 / rank
 
Normal rank
Property / Mathematics Subject Classification ID
 
Property / Mathematics Subject Classification ID: 60J25 / rank
 
Normal rank
Property / zbMATH DE Number
 
Property / zbMATH DE Number: 6348523 / rank
 
Normal rank
Property / zbMATH Keywords
 
Bellman equation
Property / zbMATH Keywords: Bellman equation / rank
 
Normal rank
Property / zbMATH Keywords
 
continuous-time Markov decision process
Property / zbMATH Keywords: continuous-time Markov decision process / rank
 
Normal rank
Property / zbMATH Keywords
 
dynamic programming
Property / zbMATH Keywords: dynamic programming / rank
 
Normal rank
Property / zbMATH Keywords
 
Dynkin's formula
Property / zbMATH Keywords: Dynkin's formula / rank
 
Normal rank
Property / MaRDI profile type
 
Property / MaRDI profile type: MaRDI publication profile / rank
 
Normal rank
Property / OpenAlex ID
 
Property / OpenAlex ID: W2016588739 / rank
 
Normal rank
Property / arXiv ID
 
Property / arXiv ID: 1103.0134 / rank
 
Normal rank
Property / cites work
 
Property / cites work: Stochastic optimal control. The discrete time case / rank
 
Normal rank
Property / cites work
 
Property / cites work: Continuous Time Discounted Jump Markov Decision Processes: A Discrete-Event Approach / rank
 
Normal rank
Property / cites work
 
Property / cites work: Reduction of Discounted Continuous-Time MDPs with Unbounded Jump and Reward Rates to Discrete-Time Total-Reward MDPs / rank
 
Normal rank
Property / cites work
 
Property / cites work: Continuous-Time Markov Decision Processes with Discounted Rewards: The Case of Polish Spaces / rank
 
Normal rank
Property / cites work
 
Property / cites work: Continuous-time Markov decision processes. Theory and applications / rank
 
Normal rank
Property / cites work
 
Property / cites work: Discounted Continuous-Time Markov Decision Processes with Constraints: Unbounded Transition and Loss Rates / rank
 
Normal rank
Property / cites work
 
Property / cites work: Discounted continuous-time constrained Markov decision processes in Polish spaces / rank
 
Normal rank
Property / cites work
 
Property / cites work: Denumerable-state continuous-time Markov decision processes with unbounded transition and reward rates under the discounted criterion / rank
 
Normal rank
Property / cites work
 
Property / cites work: A survey of recent results on continuous-time Markov decision processes (with comments and rejoinder) / rank
 
Normal rank
Property / cites work
 
Property / cites work: Linear Programming and Constrained Average Optimality for General Continuous-Time Markov Decision Processes in History-Dependent Policies / rank
 
Normal rank
Property / cites work
 
Property / cites work: Absorbing Continuous-Time Markov Decision Processes with Total Cost Criteria / rank
 
Normal rank
Property / cites work
 
Property / cites work: Q4255598 / rank
 
Normal rank
Property / cites work
 
Property / cites work: Multivariate point processes: predictable projection, Radon-Nikodym derivatives, representation of martingales / rank
 
Normal rank
Property / cites work
 
Property / cites work: Continuously Discounted Markov Decision Model with Countable State and Action Space / rank
 
Normal rank
Property / cites work
 
Property / cites work: Semi-Markov and Jump Markov Controlled Models: Average Cost Criterion / rank
 
Normal rank
Property / cites work
 
Property / cites work: Q4331806 / rank
 
Normal rank
Property / cites work
 
Property / cites work: Accuracy of fluid approximations to controlled birth-and-death processes: absorbing case / rank
 
Normal rank
Property / cites work
 
Property / cites work: Discounted Continuous-Time Markov Decision Processes with Unbounded Rates: The Convex Analytic Approach / rank
 
Normal rank
Property / cites work
 
Property / cites work: Discounted continuous-time Markov decision processes with unbounded rates and randomized history-dependent policies: the dynamic programming approach / rank
 
Normal rank
Property / cites work
 
Property / cites work: The transformation method for continuous-time Markov decision processes / rank
 
Normal rank
Property / cites work
 
Property / cites work: Selected Topics on Continuous-Time Controlled Markov Chains and Markov Games / rank
 
Normal rank
Property / cites work
 
Property / cites work: Q4315289 / rank
 
Normal rank
Property / cites work
 
Property / cites work: Continuous-Time Markov Decision Processes with Unbounded Transition and Discounted-Reward Rates / rank
 
Normal rank
 

Latest revision as of 18:06, 9 December 2024

Language: English
Label: Discounted continuous-time Markov decision processes with unbounded rates and randomized history-dependent policies: the dynamic programming approach
Description: scientific article

    Statements

    Discounted continuous-time Markov decision processes with unbounded rates and randomized history-dependent policies: the dynamic programming approach (English)
    26 September 2014
    Bellman equation
    continuous-time Markov decision process
    dynamic programming
    Dynkin's formula

    Identifiers