Task-guided IRL in POMDPs that scales
Publication: 6157197
DOI: 10.1016/J.ARTINT.2023.103856
arXiv: 2301.01219
OpenAlex: W4313641965
MaRDI QID: Q6157197
Christian Ellis, Ufuk Topcu, Murat Cubuktepe, Unnamed Author, Craig Lennon
Publication date: 19 June 2023
Published in: Artificial Intelligence
Abstract: In inverse reinforcement learning (IRL), a learning agent infers a reward function encoding the underlying task using demonstrations from experts. However, many existing IRL techniques make the often unrealistic assumption that the agent has access to full information about the environment. We remove this assumption by developing an algorithm for IRL in partially observable Markov decision processes (POMDPs). We address two limitations of existing IRL techniques. First, they require an excessive amount of data due to the information asymmetry between the expert and the learner. Second, most of these IRL techniques require solving the computationally intractable forward problem -- computing an optimal policy given a reward function -- in POMDPs. The developed algorithm reduces the information asymmetry while increasing the data efficiency by incorporating task specifications expressed in temporal logic into IRL. Such specifications may be interpreted as side information available to the learner a priori in addition to the demonstrations. Further, the algorithm avoids a common source of algorithmic complexity by building on causal entropy, rather than entropy, as the measure of the likelihood of the demonstrations. Nevertheless, the resulting problem is nonconvex due to the so-called forward problem. We address the intrinsic nonconvexity of the forward problem in a scalable manner through a sequential linear programming scheme that is guaranteed to converge to a locally optimal policy. In a series of examples, including experiments in a high-fidelity Unity simulator, we demonstrate that even with a limited amount of data and POMDPs with tens of thousands of states, our algorithm learns reward functions and policies that satisfy the task while inducing behavior similar to the expert's by leveraging the provided side information.
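The sequential linear programming scheme mentioned in the abstract repeatedly linearizes the nonconvex problem around the current iterate and restricts each update to a trust region. As a rough, self-contained sketch of that general idea only, the Python snippet below runs a generic SLP loop with a trust region on a toy nonconvex objective; the objective, step-acceptance rule, and all parameters here are illustrative assumptions, not the formulation used in the paper.

```python
# Minimal sketch of sequential linear programming (SLP) with a trust region.
# NOT the authors' implementation: the objective and all parameters are toy
# stand-ins chosen only to show the linearize / solve-LP / accept-or-shrink loop.
import numpy as np
from scipy.optimize import linprog

def f(x):
    # Toy nonconvex objective standing in for the nonconvex IRL objective.
    return np.sin(x[0]) * np.cos(x[1]) + 0.1 * (x[0] ** 2 + x[1] ** 2)

def grad_f(x):
    return np.array([
        np.cos(x[0]) * np.cos(x[1]) + 0.2 * x[0],
        -np.sin(x[0]) * np.sin(x[1]) + 0.2 * x[1],
    ])

def slp(x0, delta=0.5, shrink=0.5, grow=1.5, tol=1e-6, max_iter=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        # Linearize the objective around the current iterate and restrict the
        # step to a box-shaped trust region of radius delta.
        res = linprog(c=g, bounds=[(-delta, delta)] * len(x), method="highs")
        step = res.x
        predicted = -g @ step           # decrease predicted by the linear model
        actual = f(x) - f(x + step)     # decrease actually achieved
        if predicted <= tol:
            break                       # linear model sees no further progress
        if actual >= 0.1 * predicted:
            x = x + step                # good agreement: accept step, grow region
            delta *= grow
        else:
            delta *= shrink             # poor agreement: reject step, shrink region
    return x

print(slp([2.0, 1.0]))
```

In the paper's setting the linearized subproblems would involve the POMDP policy variables and the temporal-logic task constraints rather than a toy objective; this sketch only illustrates the generic trust-region accept/shrink template behind such a scheme.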
Full work available at URL: https://arxiv.org/abs/2301.01219
Keywords: inverse reinforcement learning; sequential convex optimization; planning in partially observable environment
Cites Work
- Enforcing almost-sure reachability in POMDPs
- Verification and control of partially observable probabilistic systems
- Recent advances in trust region algorithms
- Robust control and model misspecification
- The Principle of Maximum Causal Entropy for Estimating Interacting Processes
- Infinite Time Horizon Maximum Causal Entropy Inverse Reinforcement Learning
- On Near Optimality of the Set of Finite-State Controllers for Average Cost POMDP