Nearest-neighbor interaction systems in the tensor-train format
From MaRDI portal
Publication:1686581
Abstract: Low-rank tensor approximation approaches have become an important tool in the scientific computing community. The aim is to enable the simulation and analysis of high-dimensional problems that can no longer be solved with conventional methods owing to the so-called curse of dimensionality. This requires techniques to handle linear operators defined on extremely large state spaces and to solve the resulting systems of linear equations or eigenvalue problems. In this paper, we present a systematic tensor-train decomposition for nearest-neighbor interaction systems that is applicable to a host of different problems. With the aid of this decomposition, the memory consumption as well as the computational costs can be reduced significantly. Furthermore, it can be shown that in some cases the rank of the tensor decomposition does not depend on the network size, so the format remains feasible even for high-dimensional systems. We illustrate the results with several guiding examples, such as the Ising model, a system of coupled oscillators, and a CO oxidation model.
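The abstract's central object, the tensor-train (TT) decomposition, can be illustrated with a minimal sketch of the generic TT-SVD algorithm (sequential truncated SVDs over unfoldings); this is a standard NumPy illustration, not the paper's specialized nearest-neighbor construction, and the function names `tt_svd` and `tt_to_full` are chosen here for illustration.

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Decompose a full d-way tensor into TT cores via sequential SVDs.

    Each core has shape (r_{k-1}, n_k, r_k); singular values below
    eps * (largest singular value) are truncated at every step.
    """
    dims = tensor.shape
    d = len(dims)
    cores = []
    rank = 1
    mat = tensor.reshape(rank * dims[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r_new = max(1, int(np.sum(s > eps * s[0])))  # truncated TT rank
        cores.append(U[:, :r_new].reshape(rank, dims[k], r_new))
        # Carry the remainder S @ Vt on to the next unfolding.
        mat = (s[:r_new, None] * Vt[:r_new]).reshape(r_new * dims[k + 1], -1)
        rank = r_new
    cores.append(mat.reshape(rank, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into the full tensor."""
    result = cores[0]
    for core in cores[1:]:
        result = np.tensordot(result, core, axes=([-1], [0]))
    return result.squeeze(axis=(0, -1))
```

For a tensor that is an outer product of vectors, all truncated TT ranks come out as 1, mirroring the abstract's point that storage drops from exponential to linear in the number of dimensions when the ranks stay bounded.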
Recommendations
- Low-rank representation of tensor network operators with long-range pairwise interactions
- Dynamical approximation by hierarchical Tucker and tensor-train tensors
- Tensor Train Neighborhood Preserving Embedding
- Tensors in Modelling Multi-particle Interactions
- TeNeS: tensor network solver for quantum lattice systems
- Range-separated tensor format for many-particle modeling
- The space of interactions in neural network models
- \(O(N)\) random tensor models
- Tensor networks from kinematic space
- Density Matrix and Tensor Network Renormalization
Cites work
- scientific article; zbMATH DE number 3806623 (no title available)
- A dynamical low-rank approach to the chemical master equation
- A new scheme for the tensor representation
- A new tensor decomposition
- A solver for the stochastic master equation applied to gene regulatory networks
- Analysis of individual differences in multidimensional scaling via an \(n\)-way generalization of ``Eckart-Young'' decomposition
- Approximation of \(2^d\times2^d\) matrices using tensor decomposition
- Approximation of matrices with logarithmic number of parameters
- Beitrag zur Theorie des Ferromagnetismus
- Breaking the Curse of Dimensionality, Or How to Use SVD in Many Dimensions
- Dynamical approximation by hierarchical Tucker and tensor-train tensors
- Harmonic oscillators coupled by springs: Discrete solutions as a Wigner quantum system
- Hierarchical Singular Value Decomposition of Tensors
- Low-Rank Explicit QTT Representation of the Laplace Operator and Its Inverse
- Multivariate regression and machine learning with sums of separable functions
- On manifolds of tensors of fixed TT-rank
- On minimal subspaces in tensor representations
- On the approximation of high-dimensional differential equations in the hierarchical Tucker format
- Simultaneous state-time approximation of the chemical master equation using tensor product formats
- Solving the master equation without kinetic Monte Carlo: tensor train approximations for a CO oxidation model
- TT-cross approximation for multidimensional arrays
- Tensor Decompositions and Applications
- Tensor Rank and the Ill-Posedness of the Best Low-Rank Approximation Problem
- Tensor approximation of stationary distributions of chemical reaction networks
- Tensor spaces and numerical tensor calculus
- Tensor-based techniques for the blind separation of DS-CDMA signals
- Tensor-train decomposition
- The Theory of Classical Dynamics
- The alternating linear scheme for tensor optimization in the tensor train format
- \(O(d \log N)\)-quantics approximation of \(N\)-\(d\) tensors in high-dimensional numerical modeling
Cited in (6)
- Embedding stochastic differential equations into neural networks via dual processes
- The tensor Padé-type approximant with application in computing tensor exponential function
- Randomized Algorithms for Rounding in the Tensor-Train Format
- Fredholm integral equations for function approximation and the training of neural networks
- Low-rank representation of tensor network operators with long-range pairwise interactions
- Tensor-based computation of metastable and coherent sets