Active Subspace: Toward Scalable Low-Rank Learning
Publication:2840899
DOI: 10.1162/NECO_a_00369
zbMath: 1268.68139
Wikidata: Q50785539 (Scholia: Q50785539)
MaRDI QID: Q2840899
Publication date: 23 July 2013
Published in: Neural Computation
MSC: 68T05 (Learning and adaptive systems in artificial intelligence)
Related Items
- Fast Estimation of Approximate Matrix Ranks Using Spectral Densities
- Enhanced low-rank representation via sparse manifold adaption for semi-supervised learning
- Robust bilinear factorization with missing and grossly corrupted observations
- Robust alternating low-rank representation by joint \(L_p\)- and \(L_{2,p}\)-norm minimization
- Dynamic behavior analysis via structured rank minimization
- Similarity preserving low-rank representation for enhanced data representation and effective subspace learning
- Parallel active subspace decomposition for tensor robust principal component analysis
- Scalable Low-Rank Representation
Uses Software
Cites Work
- Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions
- TILT: transform invariant low-rank textures
- Local minima and convergence in low-rank semidefinite programming
- Fast singular value thresholding without singular value decomposition
- Exact matrix completion via convex optimization
- Robust principal component analysis?
- A Singular Value Thresholding Algorithm for Matrix Completion
- Rank-Sparsity Incoherence for Matrix Decomposition
- The Geometry of Algorithms with Orthogonality Constraints