Sufficient Conditions for the Value Function and Optimal Strategy to be Even and Quasi-Convex
Publication: Q4559530
DOI: 10.1109/TAC.2018.2800796
zbMATH Open: 1423.90267
arXiv: 1703.10746
OpenAlex: W2964042550
MaRDI QID: Q4559530
Authors: Jhelum Chakravorty, Aditya Mahajan
Publication date: 4 December 2018
Published in: IEEE Transactions on Automatic Control
Abstract: Sufficient conditions are identified under which the value function and the optimal strategy of a Markov decision process (MDP) are even and quasi-convex in the state. The key idea behind these conditions is the following. First, sufficient conditions for the value function and optimal strategy to be even are identified. Next, it is shown that if the value function and optimal strategy are even, then one can construct a "folded MDP" defined only on the non-negative values of the state space. Then, the standard sufficient conditions for the value function and optimal strategy to be monotone are "unfolded" to identify sufficient conditions for the value function and the optimal strategy to be quasi-convex. The results are illustrated by using an example of power allocation in remote estimation.
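The folding idea in the abstract can be illustrated with a toy symmetric MDP inspired by the remote-estimation example. The model below (reset-to-zero transmission at a fixed cost, symmetric random walk otherwise, squared-error holding cost) is an illustrative sketch, not the paper's exact formulation; value iteration then exhibits the even, quasi-convex value function, so the MDP is fully determined by its "folded" states {0, ..., N}.

```python
import numpy as np

# Illustrative symmetric MDP (assumed model, not the paper's exact one):
# states s in {-N, ..., N}; action 1 "transmits" and resets the state to 0
# at cost lam; action 0 pays the squared error s^2 and lets the state do a
# symmetric random walk, reflected at the boundaries.
N, lam, beta = 5, 10.0, 0.9
S = np.arange(-N, N + 1)

def step_cost(s, a):
    return s * s if a == 0 else lam

def next_states(s, a):
    # transmit -> reset to 0; otherwise move +/-1 with prob 1/2 each
    if a == 1:
        return [(0, 1.0)]
    lo, hi = max(s - 1, -N), min(s + 1, N)
    return [(lo, 0.5), (hi, 0.5)]

# Standard value iteration for the discounted cost.
V = {int(s): 0.0 for s in S}
for _ in range(500):
    V = {int(s): min(step_cost(s, a)
                     + beta * sum(p * V[t] for t, p in next_states(s, a))
                     for a in (0, 1))
         for s in S}

# Even: V(s) = V(-s); quasi-convex: V non-decreasing in |s|.
assert all(abs(V[s] - V[-s]) < 1e-9 for s in range(N + 1))
assert all(V[s] <= V[s + 1] + 1e-9 for s in range(N))
```

Because the dynamics and costs are symmetric about 0, the value function inherits evenness, and monotonicity on the folded state space {0, ..., N} is exactly quasi-convexity on the original state space.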
Full work available at URL: https://arxiv.org/abs/1703.10746
MSC classifications: Sensitivity, stability, parametric optimization (90C31); Markov and semi-Markov decision processes (90C40)
Cited In (2)