Counterfactual state explanations for reinforcement learning agents via generative deep learning
DOI: 10.1016/J.ARTINT.2021.103455
OpenAlex: W3124922852
MaRDI QID: Q2238641
FDO: Q2238641
Authors: Matthew L. Olson, Roli Khanna, Lawrence Neal, Fuxin Li, Weng-Keen Wong
Publication date: 2 November 2021
Published in: Artificial Intelligence
Full work available at URL: https://arxiv.org/abs/2101.12446
Recommendations
- Local and global explanations of agent behavior: integrating strategy summaries with saliency maps
- Using state abstractions to compute personalized contrastive explanations for AI agent behavior
- A Symbolic Approach for Counterfactual Explanations
- A comparison of instance-level counterfactual explanation algorithms for behavioral and textual data: SEDC, LIME-C and SHAP-C
- An ASP-based approach to counterfactual explanations for classification
Cited in: 3 publications