Exact matrix completion via convex optimization (Q2655288)

From MaRDI portal
Language: English
Label: Exact matrix completion via convex optimization
Description: scientific article
Also known as: —

    Statements

    Exact matrix completion via convex optimization (English)
    25 January 2010
The authors are concerned with recovering a data matrix from a sampling of its entries by solving a convex optimization problem. They first consider the case of low-rank matrices and prove that if the number \(m\) of sampled entries obeys \(m \geq Cn^{1.2}r \log n\) for some positive numerical constant \(C\), then with very high probability, most \(n\times n\) matrices of rank \(r\) can be perfectly recovered by solving a simple convex optimization problem. The authors show that a semidefinite program finds the matrix with minimum nuclear norm that fits the data; replacing the exponent \(1.2\) by \(1.25\) yields a result that holds for all values of the rank. This interesting paper draws on recent results on coherence and incoherence of linear subspaces of \(n\)-dimensional real space and on incoherence of unitary transformations by \textit{E. J. Candès} and \textit{J. Romberg} [Inverse Probl. 23, No. 3, 969--985 (2007; Zbl 1120.94005)], on properties of the sampling operator, and on the random orthogonal model. For a general discussion of approaches to convex optimization as well as specific technical results in the field, see the book by \textit{N. Shor} [Nondifferentiable Optimization and Polynomial Problems. Nonconvex Optimization and Its Applications 24. Dordrecht: Kluwer Academic Publishers (1998; Zbl 0901.49015)].

For a subspace \(U\) of dimension \(r\) of the \(n\)-dimensional real space and the orthogonal projection \({\mathcal P}_U\) onto \(U\), the authors define the coherence of \(U\) (vis-à-vis the standard basis \(({\mathbf e}_i)\)) as \[ \mu(U) \equiv \frac{n}{r} \max_{1 \leq i \leq n} \| {\mathcal P}_U {\mathbf e}_i \|^2 . \] To state their main result, they introduce two assumptions about an \(n_1 \times n_2\) matrix \({\mathcal M}\) whose singular value decomposition is \({\mathcal M} = \sum_{1\leq k \leq r}\sigma_{k}{\mathbf u}_{k}{\mathbf v}^{*}_{k}\) and whose column and row spaces are denoted by \(U\) and \(V\), respectively: (A0) the coherences obey \(\max(\mu(U), \mu(V)) \leq \mu_0\) for some positive \(\mu_0\); (A1) the \(n_1 \times n_2\) matrix \(\sum_{1\leq k \leq r}{\mathbf u}_{k}{\mathbf v}^{*}_{k}\) has a maximal entry bounded by \(\mu_1 \sqrt{r/(n_1 n_2)}\) in absolute value for some positive \(\mu_1\). Under these assumptions, the main result of the paper asserts that if a matrix has row and column spaces that are incoherent with the standard basis, then nuclear norm minimization can recover this matrix from a random sampling of a small number of entries \({\mathcal M}_{ij}\), \((i,j) \in \Omega\).
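To make (A0) and (A1) concrete, the following Python fragment is a minimal sketch (an illustration for this review, not code from the paper) that estimates the coherences \(\mu(U)\), \(\mu(V)\) and the smallest admissible \(\mu_1\) for a small random low-rank matrix; the function name `coherence` and the matrix sizes are hypothetical choices:

```python
# Hypothetical illustration (not from the paper): estimate the coherence
# mu(U) = (n/r) * max_i ||P_U e_i||^2 and the smallest mu_1 satisfying (A1)
# for a small random rank-r matrix M = sum_k sigma_k u_k v_k^*.
import numpy as np

def coherence(basis):
    """Coherence of the span of `basis` (orthonormal columns, shape n x r)."""
    n, r = basis.shape
    # ||P_U e_i||^2 equals the squared Euclidean norm of the i-th row of the basis.
    return (n / r) * np.max(np.sum(basis**2, axis=1))

rng = np.random.default_rng(0)
n1, n2, r = 50, 40, 3
M = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))  # random rank-r matrix

U, s, Vt = np.linalg.svd(M, full_matrices=False)
U, V = U[:, :r], Vt[:r, :].T          # orthonormal bases for the column and row spaces

mu0 = max(coherence(U), coherence(V))                # assumption (A0)
E = U @ V.T                                          # sum_k u_k v_k^*
mu1 = np.abs(E).max() * np.sqrt(n1 * n2 / r)         # smallest mu_1 satisfying (A1)
print(f"mu(U) = {coherence(U):.2f}, mu(V) = {coherence(V):.2f}, mu_1 = {mu1:.2f}")
```

Small values of \(\mu_0\) and \(\mu_1\) correspond to the incoherent regime in which the recovery guarantee applies.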
The paper has eight sections: 1. Introduction; 2. Which Matrices Are Incoherent?; 3. Duality; 4. Architecture of the Proof; 5. Connections with Random Graph Theory; 6. Proofs of the Critical Lemmas; 7. Numerical Experiments; 8. Discussion. The first section treats the problem of matrix completion via rank minimization under equality constraints and justifies the authors' choice of an alternative which minimizes the sum of the singular values of the matrix \(X\) (the matrix nuclear norm \(\| X \|_{*}\)) over the constraint set (problem (1.5)): \[ \text{minimize } \| X \|_{*} \quad \text{subject to } X_{ij} = {\mathcal M}_{ij}, \; (i,j) \in \Omega. \] The random orthogonal model and, more generally, matrices with incoherent column and row spaces are presented in Section 2. In Section 3 the authors establish sufficient conditions which guarantee that the true low-rank matrix \({\mathcal M}\) is the unique solution to problem (1.5). One of these conditions is the existence of a dual vector obeying two crucial properties. ``Section 4 constructs such a dual vector and provides the overall architecture of the proof which shows that, indeed, this vector obeys the desired properties provided that the number of measurements is sufficiently large. Surprisingly, as explored in Sect. 5, the existence of a dual vector certifying that \({\mathcal M}\) is unique is related to some problems in random graph theory including `the coupon collector's problem'. Following this discussion, we prove our main result via several intermediate results which are all proven in Sect. 6. Section 7 introduces numerical experiments showing that matrix completion based on nuclear norm minimization works well in practice. Section 8 closes the paper with a short summary of our findings, a discussion of important extensions and improvements.''

The appendix contains the proofs of two results. The first (Theorem 4.2 from Section 4) states that the operator \({\mathcal P}_{T} {\mathcal P}_{\Omega} {\mathcal P}_{T}\) (the product of the projector \({\mathcal P}_{T}\) onto the linear space \(T\) and the orthogonal projector \({\mathcal P}_{\Omega}\) onto the entries indexed by the set \(\Omega\) of locations of the observed entries \({\mathcal M}_{ij}\)) does not deviate much from its expected value; this part follows from the concentration inequality of \textit{M. Talagrand} [Invent. Math. 126, No. 3, 505--563 (1996; Zbl 0893.60001)]. The second is a bound on \(q\)th moments connected with the Schatten \(q\)-norm of a Rademacher series (Lemma 6.2 from Section 6); its proof is based on the noncommutative Khintchine inequality (Lemma 6.1 from Section 6) of \textit{F. Lust-Piquard} [C. R. Acad. Sci., Paris, Sér. I 303, 289--292 (1986; Zbl 0592.47038)] and \textit{A. Buchholz} [Math. Ann. 319, 1--16 (2001; Zbl 0991.46035)]. Some of the results and estimates used in proving the main theorem are of independent interest. The bibliography contains about forty items.
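In the spirit of the numerical experiments of Section 7, problem (1.5) can be prototyped with an off-the-shelf convex solver. The sketch below is not the authors' implementation; it assumes the cvxpy modeling package and its default solver, and the sampling scheme and sizes are chosen only for illustration:

```python
# Minimal sketch of problem (1.5): minimize ||X||_* subject to X_ij = M_ij on Omega.
# Assumes the cvxpy package; this is an illustration, not the authors' code.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, r = 30, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # hidden rank-r matrix

m = 6 * n * r                                    # number of revealed entries
idx = rng.choice(n * n, size=m, replace=False)   # Omega sampled uniformly at random
Omega = np.zeros((n, n))
Omega.flat[idx] = 1.0                            # indicator of observed positions

X = cp.Variable((n, n))
constraints = [cp.multiply(Omega, X) == Omega * M]   # X_ij = M_ij for (i,j) in Omega
problem = cp.Problem(cp.Minimize(cp.normNuc(X)), constraints)
problem.solve()

rel_err = np.linalg.norm(X.value - M, "fro") / np.linalg.norm(M, "fro")
print(f"relative recovery error: {rel_err:.2e}")
```

With \(m = 6nr\) of the \(n^2\) entries revealed, the recovered \(X\) typically agrees with \({\mathcal M}\) up to solver tolerance, in line with the practical behaviour reported in Section 7.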
    matrix completion
    low-rank matrices
    singular value decomposition
    convex optimization
    duality in optimization
    nuclear norm minimization
    random matrices
    noncommutative Khintchine inequality
    decoupling
    compressed sensing
