Nearly optimal stochastic approximation for online principal subspace estimation


DOI: 10.1007/S11425-021-1972-5
zbMATH Open: 1515.65027
arXiv: 1711.06644
Wikidata: Q114222406 (Scholia: Q114222406)
MaRDI QID: Q6041665
FDO: Q6041665


Authors:


Publication date: 12 May 2023

Published in: Science China. Mathematics

Abstract: Principal component analysis (PCA) is widely used to analyze high-dimensional data. It converts a set of observations of possibly correlated variables into a set of linearly uncorrelated variables via an orthogonal transformation. To handle streaming data and reduce the complexity of PCA, (subspace) online PCA iterations were proposed to update the orthogonal transformation incrementally, taking one observed data point at a time. Existing work on the convergence of (subspace) online PCA iterations mostly focuses on the case where the samples are almost surely uniformly bounded. In this paper, we analyze the convergence of a subspace online PCA iteration under a more practical assumption and obtain a nearly optimal finite-sample error bound. Our convergence rate almost matches the minimax information lower bound. We prove that the convergence is nearly global in the sense that the subspace online PCA iteration converges with high probability from random initial guesses. This work also yields a simpler proof of recent results on online PCA for the first principal component only.
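To make the iteration scheme concrete, the following is a minimal sketch of a generic Oja-style subspace online PCA update in Python with NumPy. The function name, the 1/t step-size schedule, and the QR re-orthonormalization are illustrative assumptions for a standard member of this family of methods, not necessarily the exact iteration analyzed in the paper.

import numpy as np

def subspace_online_pca(stream, p, eta=1.0, seed=0):
    """Oja-style online iteration for a p-dimensional principal subspace.

    stream: iterable of d-dimensional samples x_t (one data point at a time)
    p:      target subspace dimension
    eta:    base learning rate for the decaying 1/t schedule (assumed here)

    Returns a d-by-p matrix U with orthonormal columns approximating the
    top-p eigenspace of the population covariance E[x x^T].
    """
    rng = np.random.default_rng(seed)
    U = None
    for t, x in enumerate(stream, start=1):
        x = np.asarray(x, dtype=float)
        if U is None:
            # Random initial guess: orthonormalized Gaussian matrix.
            U, _ = np.linalg.qr(rng.standard_normal((x.size, p)))
        # Stochastic gradient step using the rank-one sample covariance x x^T.
        U = U + (eta / t) * np.outer(x, x @ U)
        # Re-orthonormalize so the iterate stays an orthonormal basis.
        U, _ = np.linalg.qr(U)
    return U

# Usage on synthetic data whose top-2 eigenspace is span(e_1, e_2):
rng = np.random.default_rng(1)
stds = np.array([5.0, 3.0, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5])
samples = (stds * rng.standard_normal(stds.size) for _ in range(20000))
U = subspace_online_pca(samples, p=2)
print(np.round(np.abs(U), 2))  # mass should concentrate on the first two rows

Note that the Gaussian samples in this usage example are not almost surely bounded, which is exactly the regime the paper's relaxed assumption is meant to cover.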


Full work available at URL: https://arxiv.org/abs/1711.06644






Cited in: 12 documents




