An optimal control problem with a random stopping time (Q1823913)

From MaRDI portal
Cited works:
- Optimal control of piecewise deterministic Markov process
- Necessary Conditions for Optimal Control Problems with Infinite Horizons
- On the Transversality Condition in Infinite Horizon Optimal Problems
- Existence theorems for Lagrange control problems with unbounded time domain
- Control of systems with jump Markov disturbances
- Sufficient Conditions for Optimality and the Justification of the Dynamic Programming Method
- On the Admissible Synthesis in Optimal Control Theory and Differential Games
- Dynamic Programming and Minimum Principles for Systems with Jump Markov Disturbances

Latest revision as of 10:49, 20 June 2024

scientific article

    Statements

    An optimal control problem with a random stopping time (English)
    1990
    This paper deals with a stochastic optimal control problem where the randomness is essentially concentrated in the stopping time terminating the process. If the stopping time is characterized by an intensity depending on the state and control variables, one can reformulate the problem equivalently as an infinite-horizon optimal control problem. Applying dynamic programming and minimum principle techniques to this associated deterministic control problem yields specific optimality conditions for the original stochastic control problem. It is also possible to characterize extremal steady states. The model is illustrated by an example related to the economics of technological innovation.
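The reformulation described in the review can be sketched as follows. The notation is illustrative and not taken from the paper: $g$ denotes a running reward, $\Phi$ a terminal reward, $\rho$ a discount rate, and $q$ the state- and control-dependent stopping intensity.

```latex
% Stochastic problem: reward accrues until the random stopping time \tau,
% whose survival function is determined by the intensity q(x,u):
\[
J = \mathbb{E}\!\left[\int_0^{\tau} e^{-\rho t}\, g(x(t),u(t))\,dt
      + e^{-\rho\tau}\,\Phi(x(\tau))\right],
\qquad
\Pr(\tau > t) = \exp\!\left(-\int_0^t q(x(s),u(s))\,ds\right).
\]
% Integrating over the distribution of \tau (Fubini for the running term,
% the density q(t)\Pr(\tau>t) for the terminal term) gives the equivalent
% deterministic infinite-horizon problem:
\[
J = \int_0^{\infty} \exp\!\left(-\rho t - \int_0^t q(x(s),u(s))\,ds\right)
      \bigl[g(x(t),u(t)) + q(x(t),u(t))\,\Phi(x(t))\bigr]\,dt.
\]
```

The stopping intensity thus acts as an additional, state- and control-dependent discount rate, which is what makes dynamic programming and minimum principle techniques for deterministic infinite-horizon problems applicable.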
Keywords: stochastic optimal control; infinite-horizon optimal control problem; dynamic programming; minimum principle techniques
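As a quick numerical sanity check of the equivalence described in the review, the following sketch (a hypothetical example not taken from the paper, with constant stopping intensity, constant running reward, no terminal reward, and no discounting) compares a Monte Carlo estimate of the expected reward accumulated up to the random stopping time with the corresponding deterministic infinite-horizon integral.

```python
import random

# Hypothetical check: with constant stopping intensity q and constant
# running reward rate g, the stopping time tau is Exponential(q), so
#   E[ integral_0^tau g dt ] = g * E[tau] = g / q,
# which equals the deterministic infinite-horizon integral
#   integral_0^infty g * exp(-q*t) dt = g / q.

random.seed(0)
q = 0.5      # stopping intensity (assumed constant for this check)
g = 2.0      # running reward rate (assumed constant)

# Monte Carlo estimate of the stochastic value
n = 200_000
mc = sum(g * random.expovariate(q) for _ in range(n)) / n

# Deterministic equivalent (closed form)
det = g / q

print(mc, det)  # both close to 4.0
```

With state- or control-dependent intensity the closed form disappears, but the same equivalence holds with the exponential survival factor computed along the trajectory, as in the reformulation above.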