Nearly optimal stochastic approximation for online principal subspace estimation
Publication: 6041665
Abstract: Principal component analysis (PCA) has been widely used in analyzing high-dimensional data. It converts a set of observed data points of possibly correlated variables into a set of linearly uncorrelated variables via an orthogonal transformation. To handle streaming data and to reduce the complexity of PCA, (subspace) online PCA iterations were proposed to update the orthogonal transformation iteratively by taking in one observed data point at a time. Existing work on the convergence of (subspace) online PCA iterations mostly focuses on the case where the samples are almost surely uniformly bounded. In this paper, we analyze the convergence of a subspace online PCA iteration under a more practical assumption and obtain a nearly optimal finite-sample error bound. Our convergence rate almost matches the minimax information lower bound. We prove that the convergence is nearly global, in the sense that the subspace online PCA iteration converges with high probability from random initial guesses. This work also leads to a simpler proof of a recent result on online PCA for the first principal component only.
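For readers new to the method, the following is a minimal Python sketch of an Oja-style subspace update of the kind this paper analyzes: each incoming sample nudges an orthonormal basis toward the top-k eigenspace of the covariance, followed by re-orthonormalization. The QR-based re-orthonormalization, the 1/t step-size schedule, and the synthetic Gaussian stream are illustrative assumptions for this sketch, not the exact scheme or conditions of the paper.

```python
import numpy as np

def oja_subspace_update(U, x, eta):
    """One step of an Oja-style subspace online PCA iteration (illustrative sketch).

    U   : (d, k) matrix with orthonormal columns, the current subspace estimate
    x   : a single observed data point in R^d
    eta : step size (learning rate)
    """
    U = U + eta * np.outer(x, x @ U)   # push U toward the top-k eigenspace of E[x x^T]
    Q, _ = np.linalg.qr(U)             # re-orthonormalize via a thin QR factorization
    return Q

# Synthetic stream with a dominant rank-k covariance component spanned by V.
rng = np.random.default_rng(0)
d, k, n = 50, 3, 20000
V, _ = np.linalg.qr(rng.standard_normal((d, k)))   # true principal subspace
U, _ = np.linalg.qr(rng.standard_normal((d, k)))   # random initial guess

for t in range(1, n + 1):
    x = 2.0 * (V @ rng.standard_normal(k)) + 0.1 * rng.standard_normal(d)
    U = oja_subspace_update(U, x, eta=1.0 / t)     # illustrative 1/t schedule

# Sine of the largest principal angle between span(U) and the true subspace
err = np.linalg.norm((np.eye(d) - U @ U.T) @ V, 2)
print(f"sin(theta_max) = {err:.3e}")
```

Note that the Gaussian samples above are not almost surely bounded, which is the regime the paper's analysis targets; the printed error is the sine of the largest principal angle between the estimated and true subspaces, the usual subspace distance in this literature.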
Recommendations
- Near-optimal stochastic approximation for online principal component estimation
- On the optimality of the Oja's algorithm for online PCA
- Convergence analysis of Oja's iteration for solving online PCA with nonzero-mean samples
- Online principal components analysis
- Widening the scope of an eigenvector stochastic approximation process and application to streaming PCA and related methods
Cites work
- scientific article; zbMATH DE number 3886886
- scientific article; zbMATH DE number 47363
- scientific article; zbMATH DE number 194139
- scientific article; zbMATH DE number 1049347
- A simplified neuron model as a principal component analyzer
- Foundations of Data Science
- Minimax sparse principal subspace estimation in high dimensions
- Near-optimal stochastic approximation for online principal component estimation
- Normal Multivariate Analysis and the Orthogonal Group
- On stochastic approximation of the eigenvectors and eigenvalues of the expectation of a random matrix
- On the largest principal angle between random subspaces
- Statistics on special manifolds
- The special functions and their approximations. Vol. I, II
- User-friendly tail bounds for sums of random matrices
- Weak convergence and empirical processes. With applications to statistics
Cited in (12)
- Convergence analysis of Oja's iteration for solving online PCA with nonzero-mean samples
- Streaming principal component analysis from incomplete data
- Online principal components analysis
- Near-optimal stochastic approximation for online principal component estimation
- scientific article; zbMATH DE number 7370563
- Widening the scope of an eigenvector stochastic approximation process and application to streaming PCA and related methods
- Stochastic Gauss-Newton algorithms for online PCA
- Online Schatten quasi-norm minimization for robust principal component analysis
- Optimal pivot path of the simplex method for linear programming based on reinforcement learning
- A Stochastic Majorize-Minimize Subspace Algorithm for Online Penalized Least Squares Estimation
- On the optimality of the Oja's algorithm for online PCA
- Online Nonlinear Estimation via Iterative $L^2$-Space Projections: Reproducing Kernel of Subspace