Relations among some low-rank subspace recovery models


DOI: 10.1162/NECO_A_00762
zbMATH Open: 1418.62250
arXiv: 1412.2196
Wikidata: Q40751270 (Scholia: Q40751270)
MaRDI QID: Q5380321
FDO: Q5380321

Chao Zhang, Hongyang Zhang, Junbin Gao, Zhouchen Lin

Publication date: 4 June 2019

Published in: Neural Computation

Abstract: Recovering the intrinsic low-dimensional subspaces from data distributed on them is a key preprocessing step for many applications. In recent years, much work has modeled subspace recovery as low-rank minimization problems. We find that some representative models, such as Robust Principal Component Analysis (R-PCA), Robust Low-Rank Representation (R-LRR), and Robust Latent Low-Rank Representation (R-LatLRR), are deeply connected: once a solution to one of the models is obtained, the solutions to the others can be written in closed form. Since R-PCA is the simplest, our discovery makes it the center of low-rank subspace recovery models. Our work has two important implications. First, R-PCA has a solid theoretical foundation: under certain conditions, we can find better solutions to these low-rank models with overwhelming probability, although the models are non-convex. Second, we can obtain significantly faster algorithms for these models by solving R-PCA first. The computational cost can be cut further by applying low-complexity randomized algorithms, e.g., our novel ℓ2,1 filtering algorithm, to R-PCA. Experiments verify the advantages of our algorithms over state-of-the-art alternatives based on the alternating direction method.


Full work available at URL: https://arxiv.org/abs/1412.2196
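
The abstract's key mechanism, solving R-PCA and then converting its solution into solutions of the other models in closed form, can be illustrated concretely. The sketch below is a minimal, assumption-laden illustration rather than the paper's own ℓ2,1 filtering algorithm: it solves R-PCA (min ||A||_* + λ||E||_1 s.t. D = A + E) with the standard inexact ALM iteration and then forms Z = V Vᵀ from the skinny SVD A = U S Vᵀ of the recovered low-rank term, the kind of closed-form hand-off to R-LRR that the abstract describes. All function names, tolerances, and parameter defaults (e.g., λ = 1/√max(m, n)) are illustrative choices, not taken from the paper.

```python
import numpy as np

def shrink(X, tau):
    # Entrywise soft-thresholding: proximal operator of the l1 norm.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    # Singular value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def rpca_ialm(D, lam=None, tol=1e-7, max_iter=500):
    """R-PCA: min ||A||_* + lam * ||E||_1  s.t.  D = A + E,
    solved with the standard inexact ALM iteration (illustrative defaults)."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    spec = np.linalg.norm(D, 2)                # spectral norm
    Y = D / max(spec, np.abs(D).max() / lam)   # dual-variable warm start
    mu, rho = 1.25 / spec, 1.5
    E = np.zeros_like(D)
    for _ in range(max_iter):
        A = svt(D - E + Y / mu, 1.0 / mu)      # low-rank update
        E = shrink(D - A + Y / mu, lam / mu)   # sparse update
        R = D - A - E                          # feasibility residual
        Y += mu * R
        mu *= rho
        if np.linalg.norm(R, 'fro') <= tol * np.linalg.norm(D, 'fro'):
            break
    return A, E

def rlrr_from_rpca(A, rtol=1e-10):
    # Closed-form hand-off sketched in the abstract: from the skinny SVD
    # A = U S V^T of the recovered low-rank term, Z = V V^T gives an
    # R-LRR-style representation (equivalently, pinv(A) @ A).
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    r = int(np.sum(s > rtol * s.max()))
    V = Vt[:r].T
    return V @ V.T

# Toy usage: a rank-2 matrix with roughly 5% sparse corruptions.
rng = np.random.default_rng(0)
D = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 40))
D[rng.random(D.shape) < 0.05] += 10.0
A, E = rpca_ialm(D)
Z = rlrr_from_rpca(A)                          # n x n representation
```

The point of the sketch is the division of labor the paper advocates: the expensive low-rank recovery is done once, on the simplest model (R-PCA), and the representation matrix for the richer model then costs only one SVD of the recovered term.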




