Numerical stability and tensor nuclear norm
Publication: 6153359
DOI: 10.1007/s00211-023-01377-5
arXiv: 2207.08769
MaRDI QID: Q6153359
FDO: Q6153359
Authors: Zhen Dai, Lek-Heng Lim
Publication date: 19 March 2024
Published in: Numerische Mathematik
Abstract: We present a notion of bilinear stability, which is to numerical stability what bilinear complexity is to time complexity. In bilinear complexity, an algorithm for evaluating a bilinear operator $\beta$ is a decomposition $\beta = \sum_{i=1}^{r} u_i \otimes v_i \otimes w_i$; the number of terms $r$ captures the speed of the algorithm; and its smallest possible value, i.e., the tensor rank of $\beta$, quantifies the speed of a fastest algorithm. Bilinear stability introduces norms to the mix: the growth factor of the algorithm captures the accuracy of the algorithm; and its smallest possible value, i.e., the tensor nuclear norm of $\beta$, quantifies the accuracy of a stablest algorithm. To substantiate this notion, we establish a bound for the forward error in terms of the growth factor and present numerical evidence comparing various fast algorithms for matrix and complex multiplications, showing that larger growth factors correlate with less accurate results. Compared to similar studies of numerical stability, bilinear stability is more general, applying to any bilinear operator and not just matrix or complex multiplication; is more simplistic, bounding forward error in terms of a single (growth) factor; and is truly tensorial like bilinear complexity, invariant under any orthogonal change of coordinates. As an aside, we study a new algorithm for computing complex multiplication in terms of real multiplications, much like Gauss's, but optimally fast and stable in that it attains both the tensor rank and the nuclear norm.
Full work available at URL: https://arxiv.org/abs/2207.08769
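To make the growth factor in the abstract concrete, here is a minimal sketch (not code from the paper) that computes the quantity $\sum_{i=1}^{r} \lVert u_i\rVert\,\lVert v_i\rVert\,\lVert w_i\rVert$ for two rank decompositions of the complex-multiplication tensor: the naive four-multiplication algorithm and Gauss's three-multiplication algorithm. Using the Euclidean norm on every factor is an assumption made for illustration; the paper's growth factor depends on the choice of norms on the input and output spaces.

```python
# Minimal sketch (assumed setup, not the authors' code): compare growth factors
# of two decompositions of the complex-multiplication tensor
# beta((a,b),(c,d)) = (ac - bd, ad + bc), viewed as a 2x2x2 real tensor.
import numpy as np

def tensor_from_decomposition(terms):
    """Sum of outer products u (x) v (x) w over the (u, v, w) triples."""
    T = np.zeros((2, 2, 2))
    for u, v, w in terms:
        T += np.einsum("i,j,k->ijk", u, v, w)
    return T

def growth_factor(terms):
    """Sum of ||u||_2 * ||v||_2 * ||w||_2 over the terms (Euclidean norms assumed)."""
    return sum(np.linalg.norm(u) * np.linalg.norm(v) * np.linalg.norm(w)
               for u, v, w in terms)

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Naive algorithm: four real multiplications ac, bd, ad, bc.
naive = [(e1, e1, e1), (e2, e2, -e1), (e1, e2, e2), (e2, e1, e2)]

# Gauss's algorithm: three real multiplications
#   k1 = (a+b)c, k2 = a(d-c), k3 = b(c+d);  real = k1 - k3, imag = k1 + k2.
gauss = [(e1 + e2, e1, e1 + e2), (e1, e2 - e1, e2), (e2, e1 + e2, -e1)]

# Both decompositions represent the same bilinear operator (same 2x2x2 tensor).
target = tensor_from_decomposition(naive)
assert np.allclose(tensor_from_decomposition(gauss), target)

print("growth factor, naive (rank 4):", growth_factor(naive))  # 4.0
print("growth factor, Gauss (rank 3):", growth_factor(gauss))  # 2 + 2*sqrt(2) ~ 4.83
```

Under this assumed choice of norms, the rank-3 Gauss decomposition comes out with a larger growth factor (about 4.83) than the rank-4 naive one (4), illustrating the kind of speed/stability trade-off that the growth factor is meant to quantify.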
Cites Work
- Functions of Matrices
- Nuclear norm of higher-order tensors
- Gaussian elimination is not optimal
- The metric theory of tensor products. Grothendieck's résumé revisited
- Relative bilinear complexity and matrix multiplication
- Accuracy and Stability of Numerical Algorithms
- Tensors in computations
- On multiplication of $2 \times 2$ matrices
- Computing the Polar Decomposition—with Applications
- On the nuclear norm and the singular value decomposition of tensors
- The nature of computation
- Geometry and complexity theory
- The border rank of the multiplication of $2\times 2$ matrices is seven
- The matrix sign decomposition and its relation to the polar decomposition
- On Scaling Newton’s Method for Polar Decomposition and the Matrix Sign Function
- Approximate Solutions for the Bilinear Form Computational Problem
- Complex-valued neural networks with multi-valued neurons
- Stability of fast algorithms for matrix multiplication
- Fast structured matrix computations: tensor rank and Cohn-Umans method
- Stability of a Method for Multiplying Complex Matrices with Three Real Matrix Multiplications
- Improving the numerical stability of fast matrix multiplication
- Discovering faster matrix multiplication algorithms with reinforcement learning
- Efficient complex matrix multiplication
Cited In (1)