Explanation in artificial intelligence: insights from the social sciences

Publication: 2321252

DOI: 10.1016/J.ARTINT.2018.07.007
zbMATH Open: 1478.68274
arXiv: 1706.07269
OpenAlex: W2963095307
Wikidata: Q102363022
Scholia: Q102363022
MaRDI QID: Q2321252

Tim Miller

Publication date: 28 August 2019

Published in: Artificial Intelligence

Abstract: There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to make their algorithms more understandable. Much of this research is focused on explicitly explaining decisions or actions to a human observer, and it should not be controversial to say that looking at how humans explain to each other can serve as a useful starting point for explanation in artificial intelligence. However, it is fair to say that most work in explainable artificial intelligence uses only the researchers' intuition of what constitutes a 'good' explanation. There exist vast and valuable bodies of research in philosophy, psychology, and cognitive science on how people define, generate, select, evaluate, and present explanations, which argue that people employ certain cognitive biases and social expectations in the explanation process. This paper argues that the field of explainable artificial intelligence should build on this existing research, and reviews relevant papers from philosophy, cognitive psychology/science, and social psychology that study these topics. It draws out some important findings, and discusses ways that these can be infused with work on explainable artificial intelligence.


Full work available at URL: https://arxiv.org/abs/1706.07269







Cited In (90)






