Communication Lower Bounds and Optimal Algorithms for Multiple Tensor-Times-Matrix Computation
DOI: 10.1137/22M1510443 · arXiv: 2207.10437 · Wikidata: Q128541178 · Scholia: Q128541178 · MaRDI QID: Q6154935 · FDO: Q6154935
Authors: Hussam Al Daas, Grey Ballard, Laura Grigori
Publication date: 16 February 2024
Published in: SIAM Journal on Matrix Analysis and Applications
Full work available at URL: https://arxiv.org/abs/2207.10437
MSC classifications:
- Analysis of algorithms and problem complexity (68Q25)
- Analysis of algorithms (68W40)
- Computational difficulty of problems (lower bounds, completeness, difficulty of approximation, etc.) (68Q17)
- Parallel algorithms in computer science (68W10)
- Distributed algorithms (68W15)
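The Multi-TTM computation named in the title, a tensor contracted with a matrix along each of its modes, as in forming a Tucker core, can be illustrated with a short sketch. All names, shapes, and the NumPy formulation below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Illustrative Multi-TTM: contract a 3-way tensor X with one matrix per mode.
# Dimensions and data are arbitrary, chosen only to demonstrate the operation.
rng = np.random.default_rng(0)
I, R = (4, 5, 6), (2, 3, 2)                # tensor dimensions and target ranks
X = rng.standard_normal(I)                 # input tensor
A = [rng.standard_normal((I[k], R[k])) for k in range(3)]  # one matrix per mode

# Multi-TTM as a single contraction:
# Y[a,b,c] = sum_{i,j,k} X[i,j,k] * A0[i,a] * A1[j,b] * A2[k,c]
Y = np.einsum('ijk,ia,jb,kc->abc', X, A[0], A[1], A[2])

# Equivalent sequence of single-mode TTMs; tensordot contracts the current
# first mode and appends the new (rank-sized) mode at the end, so after
# three steps the modes are back in order (R[0], R[1], R[2]).
Z = X
for k in range(3):
    Z = np.tensordot(Z, A[k], axes=([0], [0]))

print(Y.shape)           # (2, 3, 2)
print(np.allclose(Y, Z)) # True: both orderings compute the same result
```

The sequential variant shows why communication costs matter: each single-mode TTM materializes an intermediate tensor, and the paper's lower bounds and algorithms concern how much data movement such computations require in a distributed-memory setting.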
Cites Work
- TuckerMPI: a parallel C++/MPI software package for large-scale data compression via the Tucker tensor decomposition
- Tensor Decompositions and Applications
- A Multilinear Singular Value Decomposition
- An inequality related to the isoperimetric inequality
- Finite bounds for Hölder-Brascamp-Lieb multilinear inequalities
- Accelerating alternating least squares for tensor decomposition by pairwise perturbation
- Communication lower bounds for distributed-memory matrix multiplication
- Communication lower bounds and optimal algorithms for numerical linear algebra
- Randomized Algorithms for Low-Rank Tensor Decompositions in the Tucker Format
- Low-rank Tucker approximation of a tensor from streaming data
- Communication lower bounds of bilinear algorithms for symmetric tensor contractions