Parallel Algorithms for Tensor Train Arithmetic

From MaRDI portal
Publication:5028405

DOI: 10.1137/20M1387158
zbMATH Open: 1484.65088
arXiv: 2011.06532
OpenAlex: W4210523970
Wikidata: Q115214713 (Scholia: Q115214713)
MaRDI QID: Q5028405 (FDO: Q5028405)


Authors: Hussam al Daas, Grey Ballard, P. Benner


Publication date: 9 February 2022

Published in: SIAM Journal on Scientific Computing

Abstract: We present efficient and scalable parallel algorithms for performing mathematical operations for low-rank tensors represented in the tensor train (TT) format. We consider algorithms for addition, elementwise multiplication, computing norms and inner products, orthogonalization, and rounding (rank truncation). These are the kernel operations for applications such as iterative Krylov solvers that exploit the TT structure. The parallel algorithms are designed for distributed-memory computation, and we use a data distribution and strategy that parallelizes computations for individual cores within the TT format. We analyze the computation and communication costs of the proposed algorithms to show their scalability, and we present numerical experiments that demonstrate their efficiency on both shared-memory and distributed-memory parallel systems. For example, we observe better single-core performance than the existing MATLAB TT-Toolbox in rounding a 2 GB TT tensor, and our implementation achieves a 34× speedup using all 40 cores of a single node. We also show nearly linear parallel scaling on larger TT tensors up to over 10,000 cores for all mathematical operations.
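To make the abstract's kernel operations concrete, here is a minimal serial sketch of two of them, TT inner product and TT addition, for tensors stored as lists of three-way cores. This is an illustrative NumPy sketch of the standard TT formulas, not the paper's parallel implementation or API; all function names and shapes here are assumptions for illustration.

```python
# Illustrative sketch of TT kernel operations (serial NumPy, not the
# paper's distributed-memory implementation). A TT tensor is a list of
# cores, each of shape (r_prev, n_k, r_next), with boundary ranks 1.
import numpy as np

def tt_inner(cores_x, cores_y):
    """Inner product <X, Y> of two TT tensors with matching mode sizes.

    Contract mode by mode, carrying a small (r_x, r_y) matrix left to right.
    """
    M = np.ones((1, 1))  # running contraction over processed modes
    for Gx, Gy in zip(cores_x, cores_y):
        # Absorb M into Gx, then contract the shared mode and left ranks with Gy.
        T = np.einsum('ab,aic->bic', M, Gx)   # shape (ry_prev, n_k, rx_next)
        M = np.einsum('bic,bid->cd', T, Gy)   # shape (rx_next, ry_next)
    return M.item()

def tt_add(cores_x, cores_y):
    """TT representation of X + Y: block-diagonal concatenation of cores.

    Interior cores are stacked block-diagonally in the rank modes; the
    first and last cores are concatenated along their free rank edge,
    so TT ranks add (rounding would re-truncate them).
    """
    d = len(cores_x)
    out = []
    for k, (Gx, Gy) in enumerate(zip(cores_x, cores_y)):
        rx1, n, rx2 = Gx.shape
        ry1, _, ry2 = Gy.shape
        if k == 0:
            C = np.concatenate([Gx, Gy], axis=2)
        elif k == d - 1:
            C = np.concatenate([Gx, Gy], axis=0)
        else:
            C = np.zeros((rx1 + ry1, n, rx2 + ry2))
            C[:rx1, :, :rx2] = Gx
            C[rx1:, :, rx2:] = Gy
        out.append(C)
    return out

def tt_full(cores):
    """Expand a TT tensor to a dense array (for small checks only)."""
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=([T.ndim - 1], [0]))
    return T.squeeze(axis=(0, T.ndim - 1))
```

Note that `tt_add` makes the rank growth explicit: adding two TT tensors sums their ranks, which is why the rounding (rank truncation) operation studied in the paper is essential inside iterative solvers.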


Full work available at URL: https://arxiv.org/abs/2011.06532










Cited In (15)






This page was built for publication: Parallel Algorithms for Tensor Train Arithmetic
