Single-Pass PCA of Large High-Dimensional Data

From MaRDI portal

Publication: Q91640

DOI: 10.48550/arXiv.1704.07669
arXiv: 1704.07669
MaRDI QID: Q91640


Authors: Wenjian Yu, Yu Gu, Jian Li, Shenghua Liu, Yaohang Li


Publication date: 25 April 2017

Abstract: Principal component analysis (PCA) is a fundamental dimension reduction tool in statistics and machine learning. For large, high-dimensional data, computing the PCA (i.e., the singular vectors corresponding to a number of dominant singular values of the data matrix) becomes a challenging task. In this work, a single-pass randomized algorithm is proposed that computes the PCA with only one pass over the data. It is suitable for processing extremely large, high-dimensional data stored in slow memory (hard disk) or generated in a streaming fashion. Experiments with synthetic and real data validate the algorithm's accuracy, with errors orders of magnitude smaller than those of an existing single-pass algorithm. For a set of high-dimensional data stored as a 150 GB file, the proposed algorithm computes the first 50 principal components in just 24 minutes on a typical 24-core computer, using less than 1 GB of memory.
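The single-pass idea described in the abstract can be illustrated with a generic two-sided sketching scheme in the spirit of randomized SVD (Halko et al.; Tropp et al.): accumulate a range sketch Y = AΩ and a co-range sketch W = ΨA while streaming over row blocks of the data exactly once, then recover the dominant singular pairs from the small sketches. This is a hedged sketch under those assumptions, not the authors' exact algorithm; the function name, block interface, and oversampling parameters are illustrative.

```python
import numpy as np

def single_pass_pca(row_blocks, n, k, p=10, seed=0):
    """Illustrative one-pass sketching PCA (not the paper's exact method).

    row_blocks: iterable of row blocks (each of shape (b, n)) that together
    form the data matrix A; each row is visited exactly once (single pass).
    Returns the top-k approximate singular values and right singular
    vectors (principal components) of A.
    """
    rng = np.random.default_rng(seed)
    l, s = k + p, k + 2 * p                # sketch sizes with oversampling
    Omega = rng.standard_normal((n, l))    # test matrix for the range sketch
    W = np.zeros((s, n))                   # co-range sketch, W = Psi @ A
    Y_parts, Psi_parts = [], []
    for block in row_blocks:               # the ONLY pass over the data
        b = block.shape[0]
        Psi_b = rng.standard_normal((s, b))
        Y_parts.append(block @ Omega)      # accumulate range sketch Y = A @ Omega
        W += Psi_b @ block                 # accumulate co-range sketch
        Psi_parts.append(Psi_b)            # keep Psi (small) for the solve below
    Y = np.vstack(Y_parts)
    Psi = np.hstack(Psi_parts)
    Q, _ = np.linalg.qr(Y)                 # orthonormal basis for range(A)
    # Recover the small core factor X with A ~= Q @ X from the sketches alone:
    X, *_ = np.linalg.lstsq(Psi @ Q, W, rcond=None)
    _, svals, Vt = np.linalg.svd(X, full_matrices=False)
    return svals[:k], Vt[:k]               # top-k singular values / PCs
```

Note that this generic scheme keeps only the sketches Y, W, and Psi in memory, which is linear in the matrix dimensions rather than quadratic; the paper's algorithm achieves its sub-1 GB footprint on a 150 GB file with its own, more refined construction.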

Cited In (1)





