Tighter low-rank approximation via sampling the leveraged element
Publication:5362997
DOI: 10.1137/1.9781611973730.62
zbMATH Open: 1371.68320
arXiv: 1410.3886
OpenAlex: W2951791000
MaRDI QID: Q5362997
Authors: Srinadh Bhojanapalli, Prateek Jain, Sujay Sanghavi
Publication date: 5 October 2017
Published in: Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms
Abstract: In this work, we propose a new randomized algorithm for computing a low-rank approximation to a given matrix. Taking an approach different from the existing literature, our method first performs a specific biased sampling, with an element being chosen based on the leverage scores of its row and column, and then runs weighted alternating minimization over the factored form of the intended low-rank matrix, minimizing error only on these samples. Our method can leverage input sparsity, yet produces approximations in spectral (as opposed to the weaker Frobenius) norm; this combines the best aspects of otherwise disparate current results, but with a dependence on the condition number. In particular, we require computations to generate a rank- approximation to the input matrix in spectral norm, whereas the best existing method requires time to compute an approximation in Frobenius norm. Besides the tightness in spectral norm, we have a better dependence on the error. Our method is naturally and highly parallelizable.

Our new approach enables two extensions that are interesting in their own right. The first is a new method to directly compute a low-rank approximation (in efficient factored form) to the product of two given matrices: it computes a small random set of entries of the product and then runs weighted alternating minimization (as before) on these. The sampling strategy differs because we cannot access leverage scores of the product matrix and must instead work with the input matrices. The second extension is an improved algorithm, with smaller communication complexity, for the distributed PCA setting, where each server holds a small set of rows of the matrix and the servers want to compute a low-rank approximation with a small amount of communication among them.
Full work available at URL: https://arxiv.org/abs/1410.3886
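The two-stage idea in the abstract (leverage-score-biased element sampling, then weighted alternating minimization on the sampled entries) can be sketched as below. This is an illustrative toy, not the paper's exact algorithm: the function names, spectral initialization, and sample sizes are assumptions, and an exact SVD stands in for the fast leverage-score approximations the method actually relies on.

```python
import numpy as np

rng = np.random.default_rng(0)

def leverage_scores(A, r):
    # Row/column leverage scores of A's best rank-r subspaces. Computed via
    # an exact SVD for clarity; in practice approximations would be used.
    U, _, Vt = np.linalg.svd(A, full_matrices=False)
    return np.sum(U[:, :r] ** 2, axis=1), np.sum(Vt[:r, :] ** 2, axis=0)

def sample_entries(A, r, m):
    # Biased element sampling: entry (i, j) is drawn with probability
    # proportional to the sum of its row and column leverage scores.
    n1, n2 = A.shape
    row, col = leverage_scores(A, r)
    p = row[:, None] + col[None, :]
    p /= p.sum()
    flat = rng.choice(n1 * n2, size=m, replace=False, p=p.ravel())
    idx = np.unravel_index(flat, A.shape)
    return idx, p[idx]

def weighted_altmin(A, idx, probs, r, iters=40):
    # Alternating least squares over the factored form U @ V.T, minimizing
    # error on the sampled entries only, reweighted by the inverse sampling
    # probabilities (importance weighting).
    n1, n2 = A.shape
    obs = np.zeros(A.shape, dtype=bool)
    obs[idx] = True
    sw = np.zeros(A.shape)
    sw[idx] = np.sqrt(1.0 / probs)  # square roots of the importance weights
    filled = np.zeros(A.shape)
    filled[idx] = A[idx]
    # Initialize U from the top-r left singular vectors of the zero-filled
    # sampled matrix (an assumed, standard initialization for altmin).
    U = np.linalg.svd(filled, full_matrices=False)[0][:, :r]
    V = np.zeros((n2, r))
    for _ in range(iters):
        for j in range(n2):  # weighted least-squares update of V's rows
            rows = obs[:, j]
            if rows.any():
                V[j] = np.linalg.lstsq(U[rows] * sw[rows, j][:, None],
                                       A[rows, j] * sw[rows, j], rcond=None)[0]
        for i in range(n1):  # weighted least-squares update of U's rows
            cols = obs[i]
            if cols.any():
                U[i] = np.linalg.lstsq(V[cols] * sw[i, cols][:, None],
                                       A[i, cols] * sw[i, cols], rcond=None)[0]
    return U, V
```

On a noiseless low-rank matrix, sampling a constant fraction of the entries this way and running the weighted alternating minimization recovers the matrix to small relative error in the Frobenius norm; the spectral-norm guarantees and runtime bounds are, of course, only established for the paper's actual procedure.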
Recommendations
- Input sparsity time low-rank approximation via ridge leverage score sampling
- Fast computation of low rank matrix approximations
- A fast and efficient algorithm for low-rank approximation of a matrix
- Adaptive Sampling and Fast Low-Rank Matrix Approximation
- Fast Monte Carlo Algorithms for Matrices II: Computing a Low-Rank Approximation to a Matrix
Cited In (9)
- Estimating Leverage Scores via Rank Revealing Methods and Randomization
- Multilayer tensor factorization with applications to recommender systems
- Noisy tensor completion via the sum-of-squares hierarchy
- Title not available
- Towards Optimal Moment Estimation in Streaming and Distributed Models
- Robust PCA by manifold optimization
- Literature survey on low rank approximation of matrices
- Communication-efficient distributed covariance sketch, with application to distributed PCA
- Cross: efficient low-rank tensor completion