Improving the Incoherence of a Learned Dictionary via Rank Shrinkage
Publication: 5380643
DOI: 10.1162/NECO_a_00907
zbMath: 1474.68274
OpenAlex: W2532339373
Wikidata: Q50483044 (Scholia: Q50483044)
MaRDI QID: Q5380643
Shashanka Ubaru, Abd-Krim Seghouane, Yousef Saad
Publication date: 6 June 2019
Published in: Neural Computation
Full work available at URL: https://doi.org/10.1162/neco_a_00907
Cites Work
- On the conditioning of random subdictionaries
- Asymptotic bootstrap corrections of AIC for linear regression models
- Reduced-rank regression for the multivariate linear model
- Optimized projections for compressed sensing via rank-constrained nearest correlation matrix
- Pathwise coordinate optimization
- Atomic Decomposition by Basis Pursuit
- Better Subset Regression Using the Nonnegative Garrote
- Greed is Good: Algorithmic Results for Sparse Approximation
- Signal Recovery From Random Measurements Via Orthogonal Matching Pursuit
- Sparse and Redundant Representations
- Dictionary Learning Algorithms for Sparse Representation
- Optimized Projections for Compressed Sensing
- Dictionary Preconditioning for Greedy Algorithms
- K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation
- On the Non-Negative Garrotte Estimator