Geometry of policy improvement
From MaRDI portal
Publication: 1689145
DOI: 10.1007/978-3-319-68445-1_33
zbMATH Open: 1426.91076
arXiv: 1704.01785
OpenAlex: W2606500941
MaRDI QID: Q1689145
Authors: Guido Montúfar, Johannes Rauh
Publication date: 12 January 2018
Abstract: We investigate the geometry of optimal memoryless time-independent decision making in relation to the amount of information that the acting agent has about the state of the system. We show that the expected long-term reward, discounted or per time step, is maximized by policies that randomize among at most k actions whenever at most k world states are consistent with the agent's observation. Moreover, we show that the expected reward per time step can be studied in terms of the expected discounted reward. Our main tool is a geometric version of the policy improvement lemma, which identifies a polyhedral cone of policy changes in which the state value function increases for all states.
Full work available at URL: https://arxiv.org/abs/1704.01785
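The abstract's main tool is a geometric version of the policy improvement lemma. A minimal sketch of the classical lemma it generalizes, on a tiny fully observed MDP with made-up numbers (not the paper's construction): a greedy policy change with respect to the current value function raises the state value in every state.

```python
# Classical policy improvement on a toy 2-state, 2-action MDP.
# All transition probabilities and rewards below are illustrative only.

GAMMA = 0.9
STATES, ACTIONS = [0, 1], [0, 1]

# P[s][a] = list of (next_state, probability); R[s][a] = expected reward
P = {0: {0: [(0, 0.8), (1, 0.2)], 1: [(1, 1.0)]},
     1: {0: [(0, 1.0)],           1: [(1, 0.6), (0, 0.4)]}}
R = {0: {0: 0.0, 1: 1.0},
     1: {0: 0.5, 1: 2.0}}

def evaluate(policy, iters=2000):
    """Iterative policy evaluation: V(s) = R(s, pi(s)) + gamma * E[V(s')]."""
    V = {s: 0.0 for s in STATES}
    for _ in range(iters):
        V = {s: R[s][policy[s]]
                + GAMMA * sum(p * V[t] for t, p in P[s][policy[s]])
             for s in STATES}
    return V

def greedy(V):
    """One policy-improvement step: act greedily with respect to V."""
    return {s: max(ACTIONS,
                   key=lambda a: R[s][a]
                       + GAMMA * sum(p * V[t] for t, p in P[s][a]))
            for s in STATES}

pi0 = {0: 0, 1: 0}            # start from an arbitrary deterministic policy
V0 = evaluate(pi0)
pi1 = greedy(V0)
V1 = evaluate(pi1)

# Policy improvement lemma: the new values dominate state-wise.
assert all(V1[s] >= V0[s] - 1e-9 for s in STATES)
```

The paper's contribution concerns the harder partially observed, memoryless setting, where it identifies a whole polyhedral cone of policy changes (not just the greedy one) along which the state value function increases for all states simultaneously.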
Recommendations
- A polynomial time bound for Howard's policy improvement algorithm
- Finding optimal memoryless policies of POMDPs under the expected average reward criterion
- On the complexity of finite memory policies for Markov decision processes
- Near-optimal reinforcement learning in polynomial time
- Constrained Discounted Dynamic Programming
Keywords: reinforcement learning; partially observable Markov decision process; memoryless stochastic policy; policy gradient theorem
Cited In (3)