A note on optimization formulations of Markov decision processes


DOI: 10.4310/CMS.2022.V20.N3.A5
zbMATH Open: 1491.60123
arXiv: 2012.09417
OpenAlex: W3111550499
MaRDI QID: Q2129661


Authors: Lexing Ying, Yuhua Zhu


Publication date: 22 April 2022

Published in: Communications in Mathematical Sciences

Abstract: This note summarizes the optimization formulations used in the study of Markov decision processes. We consider both the discounted and undiscounted processes under the standard and the entropy-regularized settings. For each setting, we first summarize the primal, dual, and primal-dual problems of the linear programming formulation. We then detail the connections between these problems and other formulations for Markov decision processes such as the Bellman equation and the policy gradient method.
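The abstract mentions the Bellman equation as one of the formulations connected to the linear program. As a minimal illustration (not taken from the paper), the sketch below runs value iteration on a small hypothetical discounted MDP; the fixed point it converges to is the optimal value function, which is also the optimum of the LP primal min Σ_s V(s) subject to V(s) ≥ r(s,a) + γ Σ_{s'} P(s'|s,a) V(s') that such notes discuss. The transition probabilities and rewards here are made up for the example.

```python
# Illustrative sketch: Bellman optimality equation for a discounted MDP,
# solved by value iteration. The MDP (2 states, 2 actions) is hypothetical.

GAMMA = 0.9  # discount factor

# P[s][a] = list of (next_state, probability); r[s][a] = immediate reward
P = {
    0: {0: [(0, 0.8), (1, 0.2)], 1: [(1, 1.0)]},
    1: {0: [(0, 1.0)],           1: [(1, 0.6), (0, 0.4)]},
}
r = {0: {0: 1.0, 1: 0.0}, 1: {0: 0.5, 1: 2.0}}

def bellman_backup(V):
    """One application of the Bellman optimality operator T."""
    return {
        s: max(
            r[s][a] + GAMMA * sum(p * V[t] for t, p in P[s][a])
            for a in P[s]
        )
        for s in P
    }

# Iterate V <- T V; T is a gamma-contraction, so this converges.
V = {s: 0.0 for s in P}
for _ in range(2000):
    V_new = bellman_backup(V)
    if max(abs(V_new[s] - V[s]) for s in P) < 1e-10:
        V = V_new
        break
    V = V_new

# At the fixed point V = T V, so the Bellman residual is ~0.
residual = max(abs(bellman_backup(V)[s] - V[s]) for s in P)
print(residual < 1e-9)  # True
```

Any V that is feasible for the LP (i.e. V ≥ T V componentwise) upper-bounds the optimal value function, which is why minimizing Σ_s V(s) over the feasible set recovers the same fixed point that value iteration finds.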


Full work available at URL: https://arxiv.org/abs/2012.09417





Cited In (6)





