Low-rank approximation and completion of positive tensors (Q2827064)

From MaRDI portal





scientific article; zbMATH DE number 6637583

      Statements

      Publication date: 12 October 2016
      Keywords: tensor completion; tensor approximation; categorical regression
      Title: Low-rank approximation and completion of positive tensors (English)

      Review:
      This paper defines a hierarchical decomposition of positive tensors using convex optimization. This decomposition is shown to exist, to be numerically well-posed, and to coincide with the usual tensor CP decomposition [\textit{T. G. Kolda} and \textit{B. W. Bader}, SIAM Rev. 51, No. 3, 455--500 (2009; Zbl 1173.65029)] in specific cases. A randomized algorithm is given that computes the hierarchical decomposition, and subsequently the best rank-1 approximation of a positive tensor, in polynomial time; its complexity depends on the degrees of freedom of the tensor rather than on the potentially exponentially larger total number of tensor entries.

      This algorithm is then applied to the problem of tensor completion [\textit{S. Gandy} et al., Inverse Probl. 27, No. 2, Article ID 025010, 19 p. (2011; Zbl 1211.15036)], in which noisy tensor entries are observed and used to estimate the missing entries. Standard approaches to this problem require a number of measurements that is exponential in the tensor order; a new algorithm is provided that requires only a polynomial number of measurements in specific cases. In the rank-1 case, the number of measurements needed, \(O((rp^2)^{1+\zeta})\) for any \(\zeta > 0\), is a quadratic factor away from the information-theoretic lower bound of \(O(rp)\). Sparsity in the tensor leads to further algorithmic improvements.

      The paper concludes with numerical examples, using synthetic data as well as data from a bioengineered metabolic network, showing that the new approach outperforms other tensor completion algorithms.
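      For intuition, the best rank-1 approximation of a positive tensor mentioned above can be sketched with a generic alternating-least-squares update. This is only an illustrative sketch, not the paper's randomized hierarchical algorithm: the tensor shape, iteration count, and initialization are arbitrary choices.

```python
import numpy as np

def rank1_approx(T, iters=200):
    """Rank-1 approximation a (x) b (x) c of a positive 3-way tensor T
    via alternating least squares (generic sketch, not the paper's
    randomized hierarchical decomposition algorithm)."""
    p1, p2, p3 = T.shape
    a, b, c = np.ones(p1), np.ones(p2), np.ones(p3)
    for _ in range(iters):
        # Each factor update is the least-squares solution
        # with the other two factors held fixed.
        a = np.einsum('ijk,j,k->i', T, b, c) / ((b @ b) * (c @ c))
        b = np.einsum('ijk,i,k->j', T, a, c) / ((a @ a) * (c @ c))
        c = np.einsum('ijk,i,j->k', T, a, b) / ((a @ a) * (b @ b))
    return a, b, c

# Synthetic check: a positive rank-1 tensor is recovered exactly.
rng = np.random.default_rng(0)
u, v, w = rng.random(4) + 0.1, rng.random(5) + 0.1, rng.random(6) + 0.1
T = np.einsum('i,j,k->ijk', u, v, w)
a, b, c = rank1_approx(T)
approx = np.einsum('i,j,k->ijk', a, b, c)
print(np.linalg.norm(T - approx))  # near zero for exact rank-1 data
```

      On data of higher rank or with noise, the same iteration converges to a stationary point of the rank-1 fitting problem rather than to an exact fit.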
