Fundamental tensor operations for large-scale data analysis using tensor network formats
From MaRDI portal
Abstract: We discuss extended definitions of linear and multilinear operations such as Kronecker, Hadamard, and contracted products, and establish links between them for tensor calculus. Then we introduce effective low-rank tensor approximation techniques, including Candecomp/Parafac (CP), Tucker, and tensor train (TT) decompositions, with a number of mathematical and graphical representations. We also provide a brief review of the mathematical properties of the TT decomposition as a low-rank approximation technique. With the aim of breaking the curse of dimensionality in large-scale numerical analysis, we describe basic operations on large-scale vectors, matrices, and high-order tensors represented in TT decomposition. The proposed representations can be used to describe numerical methods based on TT decomposition for solving large-scale optimization problems such as systems of linear equations and symmetric eigenvalue problems.
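The TT decomposition reviewed in the abstract represents a d-way tensor as a chain of 3-way cores, computed by sequential truncated SVDs (the TT-SVD algorithm). As an illustrative sketch (not the authors' code; function names `tt_svd` and `tt_to_full` are ours), this can be written in a few lines of numpy:

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Decompose a full tensor into tensor-train (TT) cores via
    sequential truncated SVDs. Returns a list of 3-way cores
    G_k of shape (r_{k-1}, n_k, r_k), with r_0 = r_d = 1."""
    dims = tensor.shape
    d = len(dims)
    cores = []
    r = 1                      # current left TT-rank
    C = tensor
    for k in range(d - 1):
        C = C.reshape(r * dims[k], -1)
        U, S, Vt = np.linalg.svd(C, full_matrices=False)
        rk = max(1, int(np.sum(S > eps)))   # truncate tiny singular values
        cores.append(U[:, :rk].reshape(r, dims[k], rk))
        C = S[:rk, None] * Vt[:rk]          # carry the remainder forward
        r = rk
    cores.append(C.reshape(r, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into the full tensor."""
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=(full.ndim - 1, 0))
    return np.squeeze(full, axis=(0, full.ndim - 1))
```

For a rank-1 tensor (an outer product of three vectors), `tt_svd` returns three cores of TT-rank 1, and contracting them recovers the original tensor, which is the curse-of-dimensionality argument in miniature: storage drops from the product of the mode sizes to a sum of small core sizes.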
Recommendations
- Tensor networks for dimensionality reduction and large-scale optimization. I: Low-rank tensor decompositions
- Tensor Networks for Dimensionality Reduction and Large-scale Optimization: Part 2 Applications and Future Perspectives
- Numerical tensor calculus
- A literature survey of low-rank tensor approximation techniques
- Parallel Algorithms for Tensor Train Arithmetic
Cites work
- scientific article; zbMATH DE number 3321507
- A Multilinear Singular Value Decomposition
- A literature survey of low-rank tensor approximation techniques
- A new scheme for the tensor representation
- A note on tensor chain approximation
- Alternating minimal energy methods for linear systems in higher dimensions
- Approximation of \(2^d\times2^d\) matrices using tensor decomposition
- Breaking the Curse of Dimensionality, Or How to Use SVD in Many Dimensions
- Computation of extreme eigenvalues in higher dimensions using block tensor train format
- Estimating a few extreme singular values and vectors for large-scale matrices in tensor train format
- Hierarchical Singular Value Decomposition of Tensors
- Ideal spatial adaptation by wavelet shrinkage
- Low-Rank Explicit QTT Representation of the Laplace Operator and Its Inverse
- Low-rank tensor methods with subspace correction for symmetric eigenvalue problems
- Multilevel Toeplitz matrices generated by tensor-structured vectors and convolution with logarithmic complexity
- On manifolds of tensors of fixed TT-rank
- On minimal subspaces in tensor representations
- Optimization problems in contracted tensor networks
- Regularized computation of approximate pseudoinverse of large matrices using low-rank tensor train decompositions
- Solution of Linear Systems and Matrix Inversion in the TT-Format
- Sparse grids
- Tensor Decompositions and Applications
- Tensor spaces and numerical tensor calculus
- Tensor-train decomposition
- The alternating linear scheme for tensor optimization in the tensor train format
- The density-matrix renormalization group in the age of matrix product states
- The strong Kronecker product
- \(O(d \log N)\)-quantics approximation of \(N\)-\(d\) tensors in high-dimensional numerical modeling
Cited in (26)
- Estimating a few extreme singular values and vectors for large-scale matrices in tensor train format
- User-defined tensor data analysis
- An improved quantum network communication model based on compressed tensor network states
- Nonnegative tensor patch dictionary approaches for image compression and deblurring applications
- Bayesian Dynamic Tensor Regression
- Regularized computation of approximate pseudoinverse of large matrices using low-rank tensor train decompositions
- Constrained tensorial total variation problem based on an alternating conditional gradient algorithm
- The tensor network theory library
- scientific article; zbMATH DE number 7404606
- On computing high-dimensional Riemann theta functions
- Tensor-train format solution with preconditioned iterative method for high dimensional time-dependent space-fractional diffusion equations with error analysis
- Low-Rank Explicit QTT Representation of the Laplace Operator and Its Inverse
- Tensor spaces and numerical tensor calculus
- On manifolds of tensors of fixed TT-rank
- Tensor train-Karhunen-Loève expansion: new theoretical and algorithmic frameworks for representing general non-Gaussian random fields
- Global and extended global Hessenberg processes for solving Sylvester tensor equation with low-rank right-hand side
- Krylov subspace projection method for Sylvester tensor equation with low rank right-hand side
- The LSQR method for solving tensor least-squares problems
- Large scale tensor analysis by computer
- Noniterative tensor network‐based algorithm for Volterra system identification
- On the complexity of finding tensor ranks
- Extended Krylov subspace methods for solving Sylvester and Stein tensor equations
- Tensor networks for dimensionality reduction and large-scale optimization. I: Low-rank tensor decompositions
- An accelerated tensorial double proximal gradient method for total variation regularization problem
- Block tensor train decomposition for missing data estimation
- Bayesian variable selection for matrix autoregressive models