On State Aggregation to Approximate Complex Value Functions in Large-Scale Markov Decision Processes
Publication: 5347606
DOI: 10.1109/TAC.2010.2052697
zbMATH Open: 1368.90167
MaRDI QID: Q5347606
Author name not available
Publication date: 25 August 2017
Published in: IEEE Transactions on Automatic Control
Recommendations
- Performance Loss Bounds for Approximate Value Iteration with State Aggregation
- Extreme state aggregation beyond MDPs
- Extreme state aggregation beyond Markov decision processes
- Approximation of Markov decision processes with general state space
- Aggregation of the policy iteration method for nearly completely decomposable Markov chains
- Pseudometrics for State Aggregation in Average Reward Markov Decision Processes
- Approximate policy iteration for Markov decision processes via quantitative adaptive aggregations
- Approximating Markov decision processes using expected state transitions
- Finite-state approximations for denumerable multidimensional state discounted Markov decision processes
Cited In (7)
- Modified iterative aggregation procedure for maintenance optimisation of multi-component systems with failure interaction
- Power and delay optimisation in multi-hop wireless networks
- Learning to agree over large state spaces
- Fuzzy state aggregation and policy hill climbing for stochastic environments
- Parameterized Markov decision process and its application to service rate control
- Control-limit policies for a class of stopping time problems with termination restrictions
- Revenue management for operations with urgent orders