TabNet: Attentive Interpretable Tabular Learning

From MaRDI portal
Publication:120340

DOI: 10.48550/ARXIV.1908.07442
arXiv: 1908.07442
MaRDI QID: Q120340


Authors: Sercan O. Arik, Tomas Pfister


Publication date: 20 August 2019

Abstract: We propose a novel high-performance and interpretable canonical deep tabular data learning architecture, TabNet. TabNet uses sequential attention to choose which features to reason from at each decision step, enabling interpretability and more efficient learning as the learning capacity is used for the most salient features. We demonstrate that TabNet outperforms other neural network and decision tree variants on a wide range of non-performance-saturated tabular datasets and yields interpretable feature attributions plus insights into the global model behavior. Finally, for the first time to our knowledge, we demonstrate self-supervised learning for tabular data, significantly improving performance with unsupervised representation learning when unlabeled data is abundant.
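The sequential attention described in the abstract selects a sparse subset of features at each decision step. As an illustration only (not the authors' implementation), the sketch below shows the core masking idea in NumPy: a sparsemax projection (Martins & Astudillo, 2016, which TabNet uses for its masks) turns attention logits into a sparse feature mask, and a "prior" scale discourages re-using features across steps. The function names, the given logits, and the relaxation value `gamma = 1.5` are assumptions for the sketch.

```python
import numpy as np

def sparsemax(z):
    """Sparsemax: a sparse alternative to softmax. Projects z onto the
    probability simplex, zeroing out weak entries, so the resulting
    feature mask selects only a few salient features."""
    z_sorted = np.sort(z)[::-1]                 # sort logits descending
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    support = 1 + k * z_sorted > cumsum         # entries kept in the support
    k_z = k[support][-1]
    tau = (cumsum[support][-1] - 1.0) / k_z     # threshold
    return np.maximum(z - tau, 0.0)

def decision_step(features, attention_logits, prior, gamma=1.5):
    """One hypothetical TabNet-style decision step: logits (scaled by the
    prior) yield a sparse mask that gates the input features elementwise;
    the prior is then relaxed so heavily-used features are downweighted
    in later steps. gamma is an assumed relaxation parameter."""
    mask = sparsemax(attention_logits * prior)
    masked = features * mask
    prior = prior * (gamma - mask)
    return masked, mask, prior

features = np.array([0.5, -1.2, 3.0, 0.1])
logits = np.array([2.0, 0.1, 1.5, -0.5])
masked, mask, prior = decision_step(features, logits, np.ones(4))
print(mask)  # sparse: sums to 1, most entries exactly zero
```

Because sparsemax outputs exact zeros, the mask itself serves as an interpretable feature attribution for that step, which is the source of the interpretability claim in the abstract.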








Cited In (1)





