Stochastic gradients for large-scale tensor decomposition


DOI: 10.1137/19M1266265
zbMATH Open: 1485.65054
arXiv: 1906.01687
OpenAlex: W3096024647
MaRDI QID: Q5037555
FDO: Q5037555


Authors: David Hong, Tamara G. Kolda


Publication date: 1 March 2022

Published in: SIAM Journal on Mathematics of Data Science

Abstract: Tensor decomposition is a well-known tool for multiway data analysis. This work proposes using stochastic gradients for efficient generalized canonical polyadic (GCP) tensor decomposition of large-scale tensors. GCP tensor decomposition is a recently proposed version of tensor decomposition that allows for a variety of loss functions such as Bernoulli loss for binary data or Huber loss for robust estimation. The stochastic gradient is formed from randomly sampled elements of the tensor and is efficient because it can be computed using the sparse matricized-tensor-times-Khatri-Rao product (MTTKRP) tensor kernel. For dense tensors, we simply use uniform sampling. For sparse tensors, we propose two types of stratified sampling that give precedence to sampling nonzeros. Numerical results demonstrate the advantages of the proposed approach and its scalability to large-scale problems.
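The sampled-gradient idea can be sketched in a few lines: draw s entries of the tensor, evaluate the elementwise loss derivative of the low-rank model at those entries, and scatter the results into factor-matrix gradients via a sampled (sparse) MTTKRP. The sketch below is a minimal illustration, not the authors' implementation: it assumes uniform sampling (the dense-tensor case; for sparse tensors the paper instead stratifies samples between zeros and nonzeros) and, for concreteness only, the Gaussian loss f(x, m) = (m - x)^2, whereas GCP admits general elementwise losses such as Bernoulli or Huber. All function names and step sizes here are illustrative assumptions.

```python
import numpy as np

def model_entries(factors, idx):
    # Evaluate the rank-r CP model at s sampled multi-indices.
    # factors: list of (n_k x r) arrays; idx: (s x d) integer array.
    prod = np.ones((idx.shape[0], factors[0].shape[1]))
    for k, A in enumerate(factors):
        prod *= A[idx[:, k], :]
    return prod.sum(axis=1)

def stochastic_gcp_gradient(X, factors, s, rng):
    # One stochastic GCP gradient with uniform sampling (dense case).
    # Illustrative Gaussian loss f(x, m) = (m - x)^2, so df/dm = 2(m - x);
    # the GCP framework allows other elementwise losses.
    d = X.ndim
    idx = np.column_stack([rng.integers(0, X.shape[k], size=s) for k in range(d)])
    x = X[tuple(idx.T)]
    m = model_entries(factors, idx)
    y = 2.0 * (m - x) * (X.size / s)   # rescale for an unbiased estimate
    grads = []
    for k in range(d):
        # Rows of the Khatri-Rao product over all modes except k,
        # restricted to the sampled indices.
        Z = np.ones((s, factors[0].shape[1]))
        for l, A in enumerate(factors):
            if l != k:
                Z *= A[idx[:, l], :]
        G = np.zeros_like(factors[k])
        np.add.at(G, idx[:, k], y[:, None] * Z)  # sampled sparse MTTKRP
        grads.append(G)
    return grads

# Tiny usage example: SGD on a random 20 x 25 x 30 tensor, rank 5.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 25, 30))
factors = [0.1 * rng.standard_normal((n, 5)) for n in X.shape]
for _ in range(200):
    grads = stochastic_gcp_gradient(X, factors, s=500, rng=rng)
    for A, G in zip(factors, grads):
        A -= 1e-4 * G
```

The efficiency claim from the abstract is visible in the scatter step: only the sampled rows of each Khatri-Rao product are ever formed, so the per-iteration cost scales with the number of samples s rather than with the full tensor size.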


Full work available at URL: https://arxiv.org/abs/1906.01687






Cited in: 13 publications





