Dictionary Learning With Few Samples and Matrix Concentration
Abstract: Let $A$ be an $n \times n$ matrix, $X$ be an $n \times p$ matrix, and $Y = AX$. A challenging and important problem in data analysis, motivated by dictionary learning and other practical problems, is to recover both $A$ and $X$ given $Y$. Under normal circumstances, this problem is clearly underdetermined. However, when $X$ is sparse and random, Spielman, Wang and Wright showed that one can recover both $A$ and $X$ efficiently from $Y$ with high probability, provided that $p$ (the number of samples) is sufficiently large. Their method works for $p \ge C n^2 \log^2 n$, and they conjectured that $p \ge C n \log n$ suffices. The bound $p \ge C n \log n$ is sharp for an obvious information-theoretic reason. In this paper, we show that $p \ge C n \log^4 n$ suffices, matching the conjectural bound up to a polylogarithmic factor. The core of our proof is a theorem concerning the concentration of random matrices, which is of independent interest. Our proof of the concentration result is based on two ideas. The first is an economical way to apply the union bound. The second is a refined version of Bernstein's concentration inequality for sums of independent variables. Neither idea is specific to random matrices, and both are applicable in general settings.
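To make the setup concrete, here is a minimal Python sketch of the generative model $Y = AX$ described in the abstract. The Bernoulli-Gaussian coefficient model, the sparsity level theta, and the choice of constant $C = 1$ in $p \ge C n \log^4 n$ are illustrative assumptions, not the paper's exact hypotheses.

```python
# A minimal sketch of the generative model Y = A X, assuming a
# Bernoulli-Gaussian sparsity model for X (the paper's precise
# distributional assumptions may differ).
import numpy as np

rng = np.random.default_rng(0)

n = 50                       # ambient dimension: A is n x n
p = int(n * np.log(n) ** 4)  # sample count, p >= C n log^4 n with C = 1
theta = 0.1                  # expected fraction of nonzero entries in X

A = rng.standard_normal((n, n))         # an arbitrary (generically invertible) dictionary
mask = rng.random((n, p)) < theta       # Bernoulli support pattern
X = mask * rng.standard_normal((n, p))  # sparse random coefficient matrix
Y = A @ X                               # the observed data

# The recovery problem: given only Y, find A and X. Recovery is at best
# up to permutation and scaling of the dictionary columns, the inherent
# ambiguity of the model.
print(Y.shape)  # (n, p)
```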
Cited in (8):
- Dictionary Learning With BLOTLESS Update
- Dictionary Learning for Two-Dimensional Kendall Shapes
- Sparse random matrices have simple spectrum
- Dictionary Learning on Grassmann Manifolds
- A novel dictionary learning method based on total least squares approach with application in high dimensional biological data
- Dictionary learning based on nonnegative matrix factorization using parallel coordinate descent
- Minimax Lower Bounds on Dictionary Learning for Tensor Data
- Trainlets: Dictionary Learning in High Dimensions