A mixture of nuclear norm and matrix factorization for tensor completion (Q1747028)

scientific article
    Statements

    26 April 2018
The core problem of tensor completion is to estimate the missing entries of a tensor from the known ones; for example, in \(3\)-dimensional scans. Let \(\mathcal{X}\) be a real \(N\)-mode tensor of size \(\ell_{1}\times\ell_{2}\times\cdots\times\ell_{N}\) with entries \(x_{i_{1}i_{2}\dots i_{N}}\), where the index \(i_{k}\) runs over a set \(I_{k}\) of size \(\ell_{k}\). The mode-\(n\) matricization of \(\mathcal{X}\) is the \(\ell_{n}\times\prod_{k\neq n}\ell_{k}\) matrix \(X_{(n)}\) whose entries are \(x_{i_{1}i_{2}\dots i_{N}}\), with row indices \(i_{n}\in I_{n}\) and column indices in \(\prod_{k\neq n}I_{k}\). The trace norm \(\left\| M\right\| _{\ast}\) of a matrix \(M\) is the sum of its singular values, and \(\left\| M\right\| _{F}\) denotes the Frobenius norm. Fix a set of weights \(\alpha_{k}\) (\(\alpha_{k}\geq0\) and \(\sum_{k}\alpha_{k}=1\)). The nuclear norm of the tensor \(\mathcal{X}\) is defined in [\textit{J. Liu} et al., ``Tensor completion for estimating missing values in visual data'', IEEE Trans. Pattern Anal. Mach. Intell. 35, 1126--1153 (2013)] as \(\left\| \mathcal{X}\right\| _{\ast }:=\sum_{n}\alpha_{n}\left\| X_{(n)}\right\| _{\ast}\). For any subset \(\Omega\) of the index set of \(\mathcal{X}\) (``the observed entries''), the tensor \(P_{\Omega}(\mathcal{X})\) is obtained from \(\mathcal{X}\) by setting the entries not indexed by \(\Omega\) equal to \(0\). Suppose now that each \(X_{(n)}\) can be approximated by a matrix of rank \(r_{n}\) (\(n=1,2,\dots,N\)).
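The definitions above can be sketched in NumPy (a minimal illustration; the function names are my own, and the column ordering of the unfolding follows one common matricization convention, which only fixes the order of the column index set \(\prod_{k\neq n}I_{k}\)):

```python
import numpy as np

def unfold(X, n):
    """Mode-n matricization: move axis n to the front and flatten the rest,
    giving an (l_n, prod_{k != n} l_k) matrix X_(n)."""
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def tensor_nuclear_norm(X, alphas):
    """Weighted tensor nuclear norm ||X||_* = sum_n alpha_n ||X_(n)||_*,
    where ||.||_* on a matrix is the sum of its singular values."""
    return sum(a * np.linalg.norm(unfold(X, n), 'nuc')
               for n, a in enumerate(alphas))

def P_Omega(X, Omega):
    """Projection onto the observed entries: keep entries where the boolean
    mask Omega is True, set the rest to zero."""
    return np.where(Omega, X, 0.0)
```

For the all-ones \(2\times2\times2\) tensor, every unfolding is a rank-one \(2\times4\) matrix with single singular value \(\sqrt{8}\), so the weighted nuclear norm is \(\sqrt{8}\) for any weights summing to \(1\).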
The authors propose estimating a tensor \(\mathcal{T}\) (whose missing values are replaced by zeros) by a tensor \(\mathcal{X}\) such that \(f_{A}(\mathcal{X})+f_{B}(\mathcal{X},X,Y)\) is minimized over \(A,B,\mathcal{X},X_{n},Y_{n}\) (\(n=1,2,\dots,N\)), where: (i) \(A\cup B\) is a partition of \(\{1,2,\dots,N\}\); (ii) \(X_{n}^{T}\) and \(Y_{n}\) are matrices with \(r_{n}\) rows, so that \(X_{n}Y_{n}\) is an approximation of the matrix \(X_{(n)}\) of rank at most \(r_{n}\); (iii) \(P_{\Omega}(\mathcal{X})=\mathcal{T}\); and (iv) \(f_{A}(\mathcal{X}):=\sum_{n\in A}\alpha_{n}\left\| X_{(n)}\right\| _{\ast}\) and \(f_{B}(\mathcal{X},X,Y):=\sum_{n\in B}\alpha_{n}\left\| X_{(n)}-X_{n}Y_{n}\right\| _{F}^{2}\). Earlier authors have used only one of the two terms, taking either \(A\) or \(B\) equal to \(\emptyset\); see, for example, the reference above and [\textit{Y. Xu} et al., Inverse Probl. Imaging 9, No. 2, 601--624 (2015; Zbl 1359.15021)]. The authors argue that, since \(f_{B}\) is smooth whilst \(f_{A}\) is nonsmooth, their mixed model captures discontinuities in the observed data better than either term alone. Some computational evidence is provided to support this.
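A direct transcription of the objective \(f_{A}+f_{B}\) can illustrate the split (evaluation only; this deliberately ignores the constraint \(P_{\Omega}(\mathcal{X})=\mathcal{T}\) and the optimization scheme the authors actually use, and all names are illustrative):

```python
import numpy as np

def unfold(X, n):
    # Mode-n matricization: an (l_n, prod_{k != n} l_k) matrix.
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def mixture_objective(X, alphas, A, B, Xs, Ys):
    """Evaluate f_A(X) + f_B(X, {X_n}, {Y_n}): weighted nuclear norms of the
    unfoldings over modes in A, plus weighted squared Frobenius residuals of
    the low-rank factorizations X_n @ Y_n over modes in B."""
    f_A = sum(alphas[n] * np.linalg.norm(unfold(X, n), 'nuc') for n in A)
    f_B = sum(alphas[n] * np.linalg.norm(unfold(X, n) - Xs[n] @ Ys[n], 'fro') ** 2
              for n in B)
    return f_A + f_B
```

With \(A=\{0\}\) and \(B=\{1,2\}\), rank-one factors that reproduce the unfoldings exactly make \(f_{B}=0\), leaving only the mode-\(0\) nuclear-norm term.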
    tensor completion
    nuclear norm