More on low rank approximation of a matrix

From MaRDI portal
Publication: 6320372

arXiv: 1906.04929 · MaRDI QID: Q6320372 · FDO: Q6320372


Authors: Qi Luan, Victor Y. Pan


Publication date: 10 June 2019

Abstract: We call a matrix algorithm superfast (aka running at sublinear cost) if it involves far fewer flops and memory cells than the matrix has entries. Such algorithms are highly desirable, or even imperative, in computations for Big Data, which involve immense matrices and quite typically reduce to solving a linear least squares problem and/or computing a low rank approximation of an input matrix. The known algorithms for these problems are not superfast, but we prove that certain superfast modifications of them output reasonable or even nearly optimal solutions for large input classes. We also propose, analyze, and test a novel superfast algorithm for iterative refinement of any crude but sufficiently close low rank approximation of a matrix. The results of our numerical tests are in good agreement with our formal study.
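The sublinear-cost ("superfast") regime the abstract describes can be illustrated with a CUR-type skeleton approximation, which reads only a few rows and columns of the input rather than all of its entries. This is a generic sketch of the technique, not the authors' algorithm; the function `cur_sketch` and its parameters are illustrative assumptions.

```python
import numpy as np

def cur_sketch(A, r, oversample=2, seed=0):
    """CUR-type skeleton approximation reading only O((m+n)k) entries of A.

    Illustrative sketch (not the paper's algorithm): sample k = r + oversample
    columns and rows uniformly at random and link them through the
    pseudoinverse of their k-by-k intersection, so A is approximated as
    C @ U @ R without ever touching most of A.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    k = min(r + oversample, m, n)
    I = rng.choice(m, size=k, replace=False)   # sampled row indices
    J = rng.choice(n, size=k, replace=False)   # sampled column indices
    C = A[:, J]                                # m-by-k column sketch
    R = A[I, :]                                # k-by-n row sketch
    U = np.linalg.pinv(A[np.ix_(I, J)])        # k-by-k link matrix
    return C, U, R

# Usage: for an exactly rank-r matrix built from random (incoherent)
# factors, the skeleton is generically an exact reconstruction.
rng = np.random.default_rng(1)
A = rng.standard_normal((300, 5)) @ rng.standard_normal((5, 200))
C, U, R = cur_sketch(A, r=5)
rel_err = np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A)
```

Uniform sampling only reads about (m + n) * k of the m * n entries, which is the sense in which the cost is sublinear; as the abstract notes, such crude approximations can then serve as input to superfast iterative refinement.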













