Discounted MDP’s: Distribution Functions and Exponential Utility Maximization

From MaRDI portal
Publication: 3754453

DOI: 10.1137/0325004
zbMath: 0617.90085
OpenAlex: W1992154527
MaRDI QID: Q3754453

Matthew J. Sobel, Kun-Jen Chung

Publication date: 1987

Published in: SIAM Journal on Control and Optimization

Full work available at URL: https://doi.org/10.1137/0325004




Related Items (37)

Risk sensitive control of Markov processes in countable state space
A consumption and investment problem via a Markov decision processes approach with random horizon
Target-level criterion in Markov decision processes
Risk-sensitive dynamic market share attraction games
Markov Decision Problems Where Means Bound Variances
Zero-sum semi-Markov games with a probability criterion
Risk-averse dynamic programming for Markov decision processes
Risk-sensitive semi-Markov decision problems with discounted cost and general utilities
On risk-sensitive piecewise deterministic Markov decision processes
Optimizing a single-product production-inventory system under constant absolute risk aversion
Distorted probability operator for dynamic portfolio optimization in times of socio-economic crisis
Continuous-Time Markov Decision Processes with Exponential Utility
Unnamed Item
On solutions of the distributional Bellman equation
Unnamed Item
First Passage Exponential Optimality Problem for Semi-Markov Decision Processes
Controlled Markov decision processes with AVaR criteria for unbounded costs
A note on negative dynamic programming for risk-sensitive control
Stopped decision processes in conjunction with general utility
Discounted Markov decision processes with utility constraints
On terminating Markov decision processes with a risk-averse objective function
Exit time risk-sensitive control for systems of cooperative agents
Risk-sensitive dividend problems
Risk-sensitive control of continuous time Markov chains
Risk-sensitive semi-Markov decision processes with general utilities and multiple criteria
Approximate Markov-Nash Equilibria for Discrete-Time Risk-Sensitive Mean-Field Games
Optimization models for the first arrival target distribution function in discrete time
A Differential Game for a Multiclass Queueing Model in the Moderate-Deviation Heavy-Traffic Regime
Minimizing risk models in Markov decision processes with policies depending on target values
Stochastic optimization of forward recursive functions
On the General Utility of Discounted Markov Decision Processes
An active-set strategy to solve Markov decision processes with good-deal risk measure
Optimal policy for minimizing risk models in Markov decision processes
Mean-variance criteria in an undiscounted Markov decision process
Maximal mean/standard deviation ratio in an undiscounted MDP
Algorithmic aspects of mean-variance optimization in Markov decision processes
Notes on average Markov decision processes with a minimum-variance criterion




This page was built for publication: Discounted MDP’s: Distribution Functions and Exponential Utility Maximization