Limited memory block Krylov subspace optimization for computing dominant singular value decompositions
From MaRDI portal
Publication:2847729
Recommendations
- Low-rank incremental methods for computing dominant singular subspaces
- The singular value decomposition: anatomy of optimizing an algorithm for extreme scale
- Lanczos, Householder transformations, and implicit deflation for fast and reliable dominant singular subspace computation
- Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions
- Split-and-combine singular value decomposition for large-scale matrix
Cited in (28)
- Accelerating convergence by augmented Rayleigh-Ritz projections for large-scale eigenpair computation
- Estimating a few extreme singular values and vectors for large-scale matrices in tensor train format
- A New First-Order Algorithmic Framework for Optimization Problems with Orthogonality Constraints
- Background subtraction using adaptive singular value decomposition
- Low-rank incremental methods for computing dominant singular subspaces
- Preconditioners for nonsymmetric indefinite linear systems
- An efficient Gauss-Newton algorithm for symmetric low-rank product matrix approximations
- Research on the advances of the singular value decomposition and its application in high-dimensional data mining
- Limited memory restarted \(\ell^p\)-\(\ell^q\) minimization methods using generalized Krylov subspaces
- Finding low-rank solutions via nonconvex matrix factorization, efficiently and provably
- TRPL+K: Thick-Restart Preconditioned Lanczos+K Method for Large Symmetric Eigenvalue Problems
- Slow and finite-time relaxations to \(m\)-bipartite consensus on the Stiefel manifold
- A tensor train approach for internet traffic data completion
- A brief introduction to manifold optimization
- A refinement of approximate invariant subspaces of matrices based on SVD in high dimensionality reduction and image compression
- Hierarchical optimization for neutron scattering problems
- Structured Quasi-Newton Methods for Optimization with Orthogonality Constraints
- Accelerating large partial EVD/SVD calculations by filtered block Davidson methods
- Seeking consensus on subspaces in federated principal component analysis
- Principal components: a descent algorithm
- Stochastic Gauss-Newton algorithms for online PCA
- A Riemannian conjugate gradient method for optimization on the Stiefel manifold
- Efficient proximal mapping computation for low-rank inducing norms
- Trace minimization method via penalty for linear response eigenvalue problems
- Subspace methods with local refinements for eigenvalue computation using low-rank tensor-train format
- A stochastic variance reduction method for PCA by an exact penalty approach
- Decomposition into low-rank plus additive matrices for background/foreground separation: a review for a comparative evaluation with a large-scale dataset
- The singular value decomposition: anatomy of optimizing an algorithm for extreme scale