Top2Vec: Distributed Representations of Topics

DOI: 10.48550/ARXIV.2008.09470
arXiv: 2008.09470
MaRDI QID: Q128481
FDO: Q128481

Dimo Angelov

Publication date: 19 August 2020

Abstract: Topic modeling is used for discovering latent semantic structure, usually referred to as topics, in a large collection of documents. The most widely used methods are Latent Dirichlet Allocation and Probabilistic Latent Semantic Analysis. Despite their popularity, they have several weaknesses. In order to achieve optimal results they often require the number of topics to be known, custom stop-word lists, stemming, and lemmatization. Additionally, these methods rely on bag-of-words representations of documents, which ignore the ordering and semantics of words. Distributed representations of documents and words have gained popularity due to their ability to capture the semantics of words and documents. We present top2vec, which leverages joint document and word semantic embedding to find topic vectors. This model does not require stop-word lists, stemming, or lemmatization, and it automatically finds the number of topics. The resulting topic vectors are jointly embedded with the document and word vectors, with distance between them representing semantic similarity. Our experiments demonstrate that top2vec finds topics which are significantly more informative and representative of the corpus trained on than probabilistic generative models.
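The workflow described in the abstract can be illustrated with a minimal sketch, assuming the author's open-source top2vec Python package (pip install top2vec); load_corpus is a hypothetical placeholder for loading a list of document strings, not part of the package.

    from top2vec import Top2Vec

    # `documents` is assumed to be a list of raw document strings; the model
    # needs a reasonably large corpus (thousands of documents) to find topics.
    # No stop-word removal, stemming, or lemmatization is required.
    documents = load_corpus()  # hypothetical helper returning a list of str

    model = Top2Vec(documents)

    # The number of topics is discovered automatically, not set in advance.
    num_topics = model.get_num_topics()

    # Topic words are the word vectors nearest each topic vector in the
    # joint embedding space; distance reflects semantic similarity.
    topic_words, word_scores, topic_nums = model.get_topics()
    print(num_topics, topic_words[0][:10])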

Cited In (1)