Accelerating nonnegative matrix factorization algorithms using extrapolation

From MaRDI portal

DOI: 10.1162/NECO_A_01157
zbMATH Open: 1470.65083
DBLP: journals/neco/AngG19
arXiv: 1805.06604
OpenAlex: W2804176448
Wikidata: Q90701089
Scholia: Q90701089
MaRDI QID: Q3379602
FDO: Q3379602


Authors: Andersen Man Shun Ang, Nicolas Gillis


Publication date: 27 September 2021

Published in: Neural Computation

Abstract: In this paper, we propose a general framework to significantly accelerate algorithms for nonnegative matrix factorization (NMF). This framework is inspired by the extrapolation scheme used to accelerate gradient methods in convex optimization and by the method of parallel tangents. However, the use of extrapolation in the context of two-block exact coordinate descent algorithms tackling the nonconvex NMF problem is novel. We illustrate the performance of this approach on two state-of-the-art NMF algorithms, namely accelerated hierarchical alternating least squares (A-HALS) and alternating nonnegative least squares (ANLS), using synthetic, image, and document data sets.
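As a rough illustration of the scheme described in the abstract, the sketch below applies extrapolation, with a simple restart when the error increases, to HALS-style two-block coordinate descent updates. The function names, the fixed extrapolation parameter `beta`, and the restart rule are simplifying assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def hals_pass(X, W, H, eps=1e-16):
    """One HALS sweep over the rows of H in the model X ~ W H (updates H in place)."""
    WtW, WtX = W.T @ W, W.T @ X
    for k in range(H.shape[0]):
        # closed-form coordinate update, projected onto the nonnegative orthant
        H[k] = np.maximum(H[k] + (WtX[k] - WtW[k] @ H) / max(WtW[k, k], eps), eps)
    return H

def nmf_extrapolated(X, r, iters=100, beta=0.5, seed=0):
    """Simplified extrapolated two-block coordinate descent for NMF."""
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], r))
    H = rng.random((r, X.shape[1]))
    Wy, Hy = W.copy(), H.copy()            # extrapolated (momentum) points
    err_prev = np.linalg.norm(X - W @ H)
    for _ in range(iters):
        W_prev, H_prev = W.copy(), H.copy()
        # update each block at the other block's extrapolated point
        H = hals_pass(X, Wy, H)
        Hy = np.maximum(H + beta * (H - H_prev), 0)
        W = hals_pass(X.T, Hy.T, W.T.copy()).T
        err = np.linalg.norm(X - W @ H)
        if err > err_prev:
            Wy, Hy = W.copy(), H.copy()    # restart: drop the momentum
        else:
            Wy = np.maximum(W + beta * (W - W_prev), 0)
        err_prev = err
    return W, H
```

The extrapolated points `Wy`, `Hy` are not used to measure the error; only the feasible iterates `W`, `H` are, which is what makes the restart test meaningful.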


Full work available at URL: https://arxiv.org/abs/1805.06604






Cited In (10)






