Bridging the gap between reinforcement learning and knowledge representation: a logical off- and on-policy framework

From MaRDI portal
Publication: Q3011967

DOI: 10.1007/978-3-642-22152-1_40
zbMATH Open: 1341.68231
arXiv: 1012.1552
OpenAlex: W1672997863
MaRDI QID: Q3011967
FDO: Q3011967


Authors: Emad W. Saad


Publication date: 29 June 2011

Published in: Lecture Notes in Computer Science

Abstract: Knowledge representation is an important issue in reinforcement learning. In this paper, we bridge the gap between reinforcement learning and knowledge representation by providing a rich knowledge representation framework, based on normal logic programs with answer set semantics, that is capable of solving model-free reinforcement learning problems in more complex domains and of exploiting domain-specific knowledge. We prove the correctness of our approach. We show that the complexity of finding an offline and an online policy for a model-free reinforcement learning problem in our approach is NP-complete. Moreover, we show that any model-free reinforcement learning problem in an MDP environment can be encoded as a SAT problem. The importance of that is model-free reinforcement …
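The abstract's claim that policy finding can be cast as a satisfiability question can be illustrated with a toy sketch. This is not the paper's answer-set encoding: the MDP, states, and horizon below are invented for illustration, and a brute-force enumeration stands in for a real SAT or ASP solver. One boolean choice variable per state plays the role of a propositional atom, and a satisfying assignment corresponds to a policy that reaches the goal.

```python
from itertools import product

# Hypothetical deterministic MDP: states 0..3, actions 'a'/'b', goal = state 3.
# NOT the paper's encoding -- an illustrative toy in which policy existence
# is phrased as satisfiability over boolean choice variables
# (one per state: True means "take action 'a'").
TRANSITIONS = {
    (0, 'a'): 1, (0, 'b'): 0,
    (1, 'a'): 1, (1, 'b'): 2,
    (2, 'a'): 3, (2, 'b'): 0,
    (3, 'a'): 3, (3, 'b'): 3,
}
GOAL, START, HORIZON = 3, 0, 10

def satisfies(assignment):
    """Check whether a boolean policy assignment reaches the goal state."""
    policy = {s: ('a' if bit else 'b') for s, bit in enumerate(assignment)}
    state = START
    for _ in range(HORIZON):
        if state == GOAL:
            return True
        state = TRANSITIONS[(state, policy[state])]
    return state == GOAL

# Brute-force "SAT solving": enumerate every assignment of the choice variables.
models = [a for a in product([False, True], repeat=4) if satisfies(a)]
print("satisfiable:", bool(models))
print("one model:", models[0])
```

A real encoding, as in the paper, would instead emit clauses (or an answer set program) whose models are exactly the goal-reaching policies, and hand them to an off-the-shelf solver; the toy only mirrors the correspondence between models and policies.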


Full work available at URL: https://arxiv.org/abs/1012.1552




Cited In (4)
