Risk sensitive control of Markov processes in countable state space
Publication: 1350178
DOI: 10.1016/S0167-6911(96)00051-5
zbMath: 0866.93101
OpenAlex: W2140282437
MaRDI QID: Q1350178
Steven I. Marcus, Daniel Hernández-Hernández
Publication date: 27 February 1997
Published in: Systems & Control Letters
Full work available at URL: https://doi.org/10.1016/s0167-6911(96)00051-5
Keywords: dynamic programming; Markov processes; discrete-time; stochastic control; Isaacs equation; risk-sensitive control; average cost; denumerable state space; infinite horizon risk-sensitive control; stochastic dynamic games; vanishing discount method
Related Items (39)
- Portfolio management under drawdown constraint in discrete-time financial markets
- Long run risk sensitive portfolio with general factors
- Unnamed Item
- Controlled semi-Markov chains with risk-sensitive average cost criterion
- Optimality equations and inequalities in a class of risk-sensitive average cost Markov decision chains
- A note on risk-sensitive control of invariant models
- Risk-Sensitive Ergodic Control of Continuous Time Markov Processes With Denumerable State Space
- Zero-Sum Risk-Sensitive Stochastic Differential Games
- Local Poisson equations associated with discrete-time Markov control processes
- Zero-sum semi-Markov games with a probability criterion
- Risk-sensitive semi-Markov decision problems with discounted cost and general utilities
- Contractive approximations in average Markov decision chains driven by a risk-seeking controller
- Risk-Sensitive Reinforcement Learning via Policy Gradient Search
- The Vanishing Discount Approach in a class of Zero-Sum Finite Games with Risk-Sensitive Average Criterion
- Risk-sensitive control of pure jump process on countable space with near monotone cost
- Dissipativity and risk-sensitivity in control problems
- Risk-sensitive multiagent decision-theoretic planning based on MDP and one-switch utility functions
- Risk-Sensitive Reinforcement Learning
- Average optimality for risk-sensitive control with general state space
- A Poisson equation for the risk-sensitive average cost in semi-Markov chains
- A Variational Formula for Risk-Sensitive Reward
- A characterization of the optimal risk-sensitive average cost in finite controlled Markov chains
- On terminating Markov decision processes with a risk-averse objective function
- Exit time risk-sensitive control for systems of cooperative agents
- A discounted approach in communicating average Markov decision chains under risk-aversion
- A sensitivity formula for risk-sensitive cost and the actor-critic algorithm
- Zero-sum risk-sensitive stochastic games
- Unnamed Item
- Risk-sensitive control of continuous time Markov chains
- Vanishing discount approximations in controlled Markov chains with risk-sensitive average criterion
- Necessary and sufficient conditions for a solution to the risk-sensitive Poisson equation on a finite state space
- Nonzero-Sum Risk-Sensitive Stochastic Games on a Countable State Space
- Approximate Markov-Nash Equilibria for Discrete-Time Risk-Sensitive Mean-Field Games
- Risk sensitive control of discrete time partially observed Markov Processes with Infinite Horizon
- Infinite horizon risk sensitive control of discrete time Markov processes with small risk
- Solutions of the average cost optimality equation for finite Markov decision chains: Risk-sensitive and risk-neutral criteria
- Ergodic risk-sensitive control of Markov processes on countable state space revisited
- Computational Methods for Risk-Averse Undiscounted Transient Markov Models
- Continuity of the optimal average cost in Markov decision chains with small risk-sensitivity
Cites Work
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Necessary conditions for the optimality equation in average-reward Markov decision processes
- Asymptotic analysis of nonlinear stochastic risk-sensitive control and differential games
- Connections between stochastic control and dynamic games
- Adaptive Markov control processes
- Remarks on the existence of solutions to the average cost optimality equation in Markov decision processes
- Optimal Control of Partially Observable Stochastic Systems with an Exponential-of-Integral Performance Index
- Discounted MDP’s: Distribution Functions and Exponential Utility Maximization
- The equivalence between infinite-horizon optimal control of stochastic systems with exponential-of-integral performance index and stochastic differential games
- Risk-sensitive control and dynamic games for partially observed discrete-time nonlinear systems
- Risk-Sensitive Control of Finite State Machines on an Infinite Horizon I
- Discrete-Time Controlled Markov Processes with Average Cost Criterion: A Survey
- Risk-Sensitive Control on an Infinite Time Horizon
- Optimal stochastic linear systems with exponential performance criteria and their relation to deterministic differential games
- Risk-Sensitive Markov Decision Processes