Relaxed Equilibria for Time-Inconsistent Markov Decision Processes
Publication: 6443097
arXiv: 2307.04227
MaRDI QID: Q6443097
FDO: Q6443097
Authors: Erhan Bayraktar, Yu-Jui Huang, Zhenhua Wang, Zhou Zhou
Publication date: 9 July 2023
Abstract: This paper considers an infinite-horizon Markov decision process (MDP) that allows for general non-exponential discount functions, in both discrete and continuous time. Due to the inherent time inconsistency, we look for a randomized equilibrium policy (i.e., a relaxed equilibrium) in an intra-personal game between an agent's current and future selves. When the MDP is modified by entropy regularization, a relaxed equilibrium is shown to exist via a nontrivial entropy estimate. As the degree of regularization diminishes, the entropy-regularized MDPs approximate the original MDP, which yields the general existence of a relaxed equilibrium in the limit by weak convergence arguments. In contrast to prior studies, which consider only deterministic policies, our existence result does not require any convexity (or concavity) of the controlled transition probabilities and reward function. Interestingly, this benefit of considering randomized policies is unique to the time-inconsistent case.
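As a hedged illustration of the entropy-regularization idea mentioned in the abstract (this is a one-step toy, not the paper's infinite-horizon construction), the sketch below computes the entropy-regularized optimal randomized policy for a single decision: maximizing the regularized objective over the probability simplex yields a softmax over action values, and as the regularization weight shrinks the randomized policy concentrates on the maximizing action. The action values `q` are hypothetical.

```python
import math

def softmax_policy(q_values, tau):
    """Entropy-regularized optimal randomized policy for a one-step
    problem: maximizing sum_a pi(a)*q(a) + tau*H(pi) over the simplex
    gives pi(a) proportional to exp(q(a)/tau).  Illustrative only; the
    paper treats infinite-horizon MDPs with non-exponential discounting."""
    m = max(q_values)  # subtract the max for numerical stability
    weights = [math.exp((q - m) / tau) for q in q_values]
    total = sum(weights)
    return [w / total for w in weights]

if __name__ == "__main__":
    q = [1.0, 2.0, 1.5]  # hypothetical action values
    for tau in (1.0, 0.1, 0.01):
        pi = softmax_policy(q, tau)
        print(tau, [round(p, 3) for p in pi])
```

For large tau the policy is nearly uniform (heavily randomized); as tau decreases it approaches the deterministic argmax policy, mirroring how the entropy-regularized MDPs approximate the original MDP as regularization diminishes.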
MSC classifications:
Markov chains (discrete-time Markov processes on discrete state spaces) (60J10)
Continuous-time Markov processes on discrete state spaces (60J27)
Equilibrium refinements (91A11)