Algebraic results and bottom-up algorithm for policies generalization in reinforcement learning using concept lattices
Publication:1003553
DOI: 10.1016/j.nahs.2006.12.001
zbMATH Open: 1169.68042
OpenAlex: W2025228809
Wikidata: Q115039494
Scholia: Q115039494
MaRDI QID: Q1003553
FDO: Q1003553
Authors: Marc Ricordeau, Michel Liquière
Publication date: 4 March 2009
Published in: Nonlinear Analysis. Hybrid Systems
Full work available at URL: https://doi.org/10.1016/j.nahs.2006.12.001
Recommendations
- Algebraic reinforcement learning. Hypothesis induction for relational reinforcement learning using term generalization.
- Learning generalized policies from planning examples using concept languages
- Towards min max generalization in reinforcement learning
- Scientific article (zbMATH DE number 67800)
- A generalization error for Q-learning
Cites Work
- Title not available
- \(\mathcal{Q}\)-learning
- Title not available
- Title not available
- Equivalence notions and model minimization in Markov decision processes
- Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning
- Relational reinforcement learning
Cited In (4)
This page was built for publication: Algebraic results and bottom-up algorithm for policies generalization in reinforcement learning using concept lattices