Active Subspace: Toward Scalable Low-Rank Learning
Publication: 2840899
DOI: 10.1162/NECO_a_00369
zbMath: 1268.68139
Wikidata: Q50785539 (Scholia: Q50785539)
MaRDI QID: Q2840899
Publication date: 23 July 2013
Published in: Neural Computation
Related Items (8)
- Scalable Low-Rank Representation
- Enhanced low-rank representation via sparse manifold adaption for semi-supervised learning
- Parallel active subspace decomposition for tensor robust principal component analysis
- Robust alternating low-rank representation by joint \(L_p\)- and \(L_{2,p}\)-norm minimization
- Dynamic behavior analysis via structured rank minimization
- Fast Estimation of Approximate Matrix Ranks Using Spectral Densities
- Robust bilinear factorization with missing and grossly corrupted observations
- Similarity preserving low-rank representation for enhanced data representation and effective subspace learning
Uses Software
Cites Work
- Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions
- TILT: transform invariant low-rank textures
- Local minima and convergence in low-rank semidefinite programming
- Fast singular value thresholding without singular value decomposition
- Exact matrix completion via convex optimization
- Robust principal component analysis?
- A Singular Value Thresholding Algorithm for Matrix Completion
- Rank-Sparsity Incoherence for Matrix Decomposition
- The Geometry of Algorithms with Orthogonality Constraints
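Several of the cited works center on singular value thresholding (SVT), the proximal operator of the nuclear norm that underlies much of the low-rank learning literature indexed here. The sketch below is a minimal NumPy illustration of that operator only, not the active-subspace algorithm of this publication; the function name `svt`, the threshold `tau`, and the example data are our assumptions.

```python
import numpy as np

def svt(M, tau):
    """Soft-threshold the singular values of M at level tau.

    This is the proximal operator of tau * nuclear norm, the
    building block of singular-value-thresholding methods for
    matrix completion (illustrative sketch, not the paper's method).
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)  # shrink and clip singular values
    return (U * s_shrunk) @ Vt           # U @ diag(s_shrunk) @ Vt

# Example: a rank-5 matrix plus small noise (hypothetical data)
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 40))
A += 0.01 * rng.standard_normal((50, 40))
X = svt(A, tau=1.0)
print(np.linalg.matrix_rank(X))  # typically close to 5 after thresholding
```

Soft-thresholding zeroes out the small singular values while shrinking the rest, which is why the operator promotes low-rank solutions.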