Task-guided IRL in POMDPs that scales

Publication: 6157197

DOI: 10.1016/J.ARTINT.2023.103856
arXiv: 2301.01219
OpenAlex: W4313641965
MaRDI QID: Q6157197

Christian Ellis, Ufuk Topcu, Murat Cubuktepe, Unnamed Author, Craig Lennon

Publication date: 19 June 2023

Published in: Artificial Intelligence

Abstract: In inverse reinforcement learning (IRL), a learning agent infers a reward function encoding the underlying task using demonstrations from experts. However, many existing IRL techniques make the often unrealistic assumption that the agent has access to full information about the environment. We remove this assumption by developing an algorithm for IRL in partially observable Markov decision processes (POMDPs). We address two limitations of existing IRL techniques. First, they require an excessive amount of data due to the information asymmetry between the expert and the learner. Second, most of these IRL techniques require solving the computationally intractable forward problem (computing an optimal policy given a reward function) in POMDPs. The developed algorithm reduces the information asymmetry and increases data efficiency by incorporating task specifications expressed in temporal logic into IRL. Such specifications may be interpreted as side information available to the learner a priori, in addition to the demonstrations. Further, the algorithm avoids a common source of algorithmic complexity by using causal entropy, rather than entropy, as the measure of the likelihood of the demonstrations. Nevertheless, the resulting problem is nonconvex due to the so-called forward problem. We address the intrinsic nonconvexity of the forward problem in a scalable manner through a sequential linear programming scheme that is guaranteed to converge to a locally optimal policy. In a series of examples, including experiments in a high-fidelity Unity simulator, we demonstrate that even with a limited amount of data and POMDPs with tens of thousands of states, our algorithm learns reward functions and policies that satisfy the task while inducing behavior similar to the expert's by leveraging the provided side information.
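
The abstract describes maximum-causal-entropy IRL constrained by a temporal logic task specification. The following is a minimal sketch of such a constrained problem in standard max-causal-entropy notation, not necessarily the paper's own; the policy \sigma, feature map \phi, specification \varphi, and threshold \lambda are assumptions introduced here for illustration.

\[
\begin{aligned}
\max_{\sigma}\;\; & H^{\mathrm{caus}}(\sigma) \\
\text{s.t.}\;\; & \mathbb{E}_{\sigma}\!\left[\sum_{t} \phi(s_t, a_t)\right]
  = \widehat{\mathbb{E}}_{\mathcal{D}}\!\left[\sum_{t} \phi(s_t, a_t)\right], \\
& \Pr\nolimits_{\sigma}(\varphi) \ge \lambda,
\end{aligned}
\]

where H^{caus} is the causal entropy of the observation-based policy \sigma, the first constraint matches expected features under \sigma to their empirical averages over the expert demonstrations \mathcal{D}, and the second constraint requires the temporal logic specification \varphi to be satisfied with probability at least \lambda. The satisfaction constraint plays the role of the side information mentioned in the abstract; the forward problem embedded in this optimization is the nonconvex part that the sequential linear programming scheme handles to local optimality. The exact constraint forms and notation in the paper may differ.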


Full work available at URL: https://arxiv.org/abs/2301.01219

This page was built for publication: Task-guided IRL in POMDPs that scales