How to Explain Individual Classification Decisions


DOI: 10.48550/ARXIV.0912.1128
arXiv: 0912.1128
MaRDI QID: Q87195

Motoaki Kawanabe, David Baehrens, Timon Schroeter, Katja Hansen, Stefan Harmeling, Klaus-Robert Müller

Publication date: 6 December 2009

Abstract: After building a classifier with modern tools of machine learning, we typically have a black box at hand that is able to predict well for unseen data. Thus, we get an answer to the question of what the most likely label of a given unseen data point is. However, most methods will provide no answer as to why the model predicted a particular label for a single instance and which features were most influential for that particular instance. The only method currently able to provide such explanations is the decision tree. This paper proposes a procedure which (based on a set of assumptions) allows one to explain the decisions of any classification method.
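The procedure described in the paper assigns each test point a local explanation vector, i.e. the gradient of the class-probability function at that point, so that large components mark features whose change would most affect the prediction. Below is a minimal sketch of that idea, assuming a scikit-learn setup; the helper explanation_vector is hypothetical and estimates the gradient by central finite differences so that it works with any black-box classifier exposing predict_proba, rather than reproducing the authors' reference implementation.

import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC

def explanation_vector(model, x0, eps=1e-4):
    # Finite-difference estimate of the gradient of P(class 1 | x)
    # at the single test point x0 (one partial derivative per feature).
    grad = np.zeros_like(x0, dtype=float)
    for j in range(x0.size):
        x_plus, x_minus = x0.copy(), x0.copy()
        x_plus[j] += eps
        x_minus[j] -= eps
        p_plus = model.predict_proba(x_plus.reshape(1, -1))[0, 1]
        p_minus = model.predict_proba(x_minus.reshape(1, -1))[0, 1]
        grad[j] = (p_plus - p_minus) / (2 * eps)
    return grad

# Toy black-box classifier on a synthetic two-class problem.
X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
clf = SVC(probability=True, random_state=0).fit(X, y)

x0 = X[0]
print("prediction:", clf.predict(x0.reshape(1, -1))[0])
print("explanation vector:", explanation_vector(clf, x0))

For models with differentiable probability outputs (the paper works this out for Gaussian process classification, for instance), the local gradient can be computed analytically instead of by finite differences.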

Cited In (1)