Continuous-time Markov decision processes with state-dependent discount factors (Q693162)

From MaRDI portal
Property / MaRDI profile type: MaRDI publication profile
Property / full work available at URL: https://doi.org/10.1007/s10440-012-9669-3
Property / OpenAlex ID: W2023432628
Property / cites work: Markov decision processes with exponentially representable discounting
Property / cites work: Continuous time control of Markov processes on an arbitrary state space: Discounted rewards
Property / cites work: Q3237805
Property / cites work: Continuous Time Discounted Jump Markov Decision Processes: A Discrete-Event Approach
Property / cites work: Markov Decision Models with Weighted Discounted Criteria
Property / cites work: Constrained dynamic programming with two discount factors: applications and an algorithm
Property / cites work: Q4061759
Property / cites work: Markov control processes with randomized discounted cost
Property / cites work: Continuous-Time Markov Decision Processes with Discounted Rewards: The Case of Polish Spaces
Property / cites work: Constrained Optimization for Average Cost Continuous-Time Markov Decision Processes
Property / cites work: Continuous-time controlled Markov chains
Property / cites work: Continuous-time Markov decision processes. Theory and applications
Property / cites work: Discounted continuous-time constrained Markov decision processes in Polish spaces
Property / cites work: New discount and average optimality conditions for continuous-time Markov decision processes
Property / cites work: Average Optimality in Markov Control Processes via Discounted-Cost Problems and Linear Programming
Property / cites work: Q4255598
Property / cites work: Q5599448
Property / cites work: Continuously Discounted Markov Decision Model with Countable State and Action Space
Property / cites work: Conditions for optimality in dynamic programming and for the limit of n-stage optimal policies to be optimal
Property / cites work: Construction and regularity of transition functions on Polish spaces under measurability conditions
Property / cites work: Existence and regularity of a nonhomogeneous transition matrix under measurability conditions

Language: English
Label: Continuous-time Markov decision processes with state-dependent discount factors
Description: scientific article

    Statements

Title: Continuous-time Markov decision processes with state-dependent discount factors (English)
Publication date: 7 December 2012
Keywords: continuous-time Markov decision processes; state-dependent discount factor; dynamic programming; explicit and exact solution; explicit expression