On the Difficulty of Evaluating Baselines: A Study on Recommender Systems

Publication: 65689

DOI: 10.48550/ARXIV.1905.01395
arXiv: 1905.01395
MaRDI QID: Q65689
FDO: Q65689

Steffen Rendle, Li Zhang, Yehuda Koren

Publication date: 4 May 2019

Abstract: Numerical evaluations with comparisons to baselines play a central role when judging research in recommender systems. In this paper, we show that running baselines properly is difficult. We demonstrate this issue on two extensively studied datasets. First, we show that results for baselines that have been used in numerous publications over the past five years for the Movielens 10M benchmark are suboptimal. With a careful setup of a vanilla matrix factorization baseline, we are not only able to improve upon the reported results for this baseline but even outperform the reported results of any newly proposed method. Second, we recap the tremendous effort that was required by the community to obtain high-quality results for simple methods on the Netflix Prize. Our results indicate that empirical findings in research papers are questionable unless they were obtained on standardized benchmarks where baselines have been tuned extensively by the research community.
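The "vanilla matrix factorization baseline" the abstract refers to is simple enough to state compactly. The sketch below is illustrative only, not the authors' tuned setup: it shows biased matrix factorization for explicit ratings, trained by stochastic gradient descent on a regularized squared error. Every hyperparameter here (rank k, learning rate, regularization strength, epoch count) is a placeholder value; the paper's point is precisely that the reported quality of such a baseline depends heavily on how carefully these values are tuned.

# Minimal sketch of a biased matrix factorization baseline for explicit
# ratings, trained with SGD. Illustration of the model class discussed
# in the abstract, NOT the authors' tuned configuration; all
# hyperparameter defaults below are placeholders.
import numpy as np

def train_mf(ratings, n_users, n_items, k=64, lr=0.005, reg=0.02,
             epochs=20, seed=0):
    """ratings: list of (user_id, item_id, rating) triples."""
    rng = np.random.default_rng(seed)
    P = rng.normal(0.0, 0.1, (n_users, k))    # user factor matrix
    Q = rng.normal(0.0, 0.1, (n_items, k))    # item factor matrix
    b_u = np.zeros(n_users)                   # user biases
    b_i = np.zeros(n_items)                   # item biases
    mu = np.mean([r for _, _, r in ratings])  # global rating mean
    for _ in range(epochs):
        # visit training triples in a fresh random order each epoch
        for idx in rng.permutation(len(ratings)):
            u, i, r = ratings[idx]
            err = r - (mu + b_u[u] + b_i[i] + P[u] @ Q[i])
            b_u[u] += lr * (err - reg * b_u[u])
            b_i[i] += lr * (err - reg * b_i[i])
            # update both factor vectors from the same error signal
            P[u], Q[i] = (P[u] + lr * (err * Q[i] - reg * P[u]),
                          Q[i] + lr * (err * P[u] - reg * Q[i]))
    return mu, b_u, b_i, P, Q

def rmse(model, ratings):
    """Root mean squared error, the metric used on Movielens 10M."""
    mu, b_u, b_i, P, Q = model
    se = [(r - (mu + b_u[u] + b_i[i] + P[u] @ Q[i])) ** 2
          for u, i, r in ratings]
    return float(np.sqrt(np.mean(se)))

Evaluated by RMSE, as on Movielens 10M, this model family spans a wide quality range depending on tuning; the abstract reports that a carefully configured instance outperformed the numbers published for newly proposed methods on that benchmark.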

Cited In (1)