SOTT: greedy approximation of a tensor as a sum of tensor trains
DOI: 10.1137/20M1381472 · zbMATH Open: 1492.65108 · OpenAlex: W3170050064 · Wikidata: Q114074131 · Scholia: Q114074131 · MaRDI QID: Q5065504 · FDO: Q5065504
Authors: Maria Fuente Ruiz, Damiano Lombardi, Virginie Ehrlacher
Publication date: 22 March 2022
Published in: SIAM Journal on Scientific Computing
Full work available at URL: https://doi.org/10.1137/20m1381472
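The title describes a greedy scheme: repeatedly fit a low-rank tensor-train (TT) approximation to the current residual and subtract it, so the tensor is represented as a sum of tensor trains. As a rough illustration only (this is a generic greedy loop built on the standard TT-SVD with capped ranks, not the authors' SOTT algorithm; all function names here are made up for the sketch):

```python
import numpy as np

def tt_svd(t, max_rank):
    """Compress a dense tensor into TT cores via sequential truncated SVDs
    (the standard TT-SVD), with every TT rank capped at max_rank."""
    dims = t.shape
    cores, r_prev = [], 1
    mat = t.reshape(dims[0], -1)
    for n in dims[:-1]:
        mat = mat.reshape(r_prev * n, -1)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, s.size)
        cores.append(u[:, :r].reshape(r_prev, n, r))
        mat = s[:r, None] * vt[:r]          # carry the remainder forward
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

def tt_to_dense(cores):
    """Contract TT cores back into a dense tensor."""
    t = cores[0]                            # shape (1, n0, r1)
    for c in cores[1:]:
        t = np.tensordot(t, c, axes=([-1], [0]))
    return t.reshape(t.shape[1:-1])         # drop the boundary ranks (1, ..., 1)

def greedy_tt_sum(t, n_terms, max_rank):
    """Greedy sum-of-TT sketch: fit a rank-capped TT to the residual,
    subtract it, and repeat n_terms times."""
    residual = t.copy()
    terms = []
    for _ in range(n_terms):
        cores = tt_svd(residual, max_rank)
        terms.append(cores)
        residual = residual - tt_to_dense(cores)
    return terms, residual
```

With `max_rank=1` each greedy term is a rank-one tensor; larger caps trade more storage per term for fewer terms. The paper studies this kind of trade-off rigorously for its SOTT variant.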
Recommendations
- Streaming Tensor Train Approximation
- The alternating linear scheme for tensor optimization in the tensor train format
- A regularized Newton method for the efficient approximation of tensors represented in the canonical tensor format
- Nonnegative tensor train factorization with DMRG technique
- Greedy low-rank approximation in Tucker format of solutions of tensor linear systems
MSC classification
- Multilinear algebra, tensor calculus (15A69)
- Numerical methods for low-rank matrix approximation; matrix compression (65F55)
Cites Work
- Tensor Decompositions and Applications
- Tensor-train decomposition
- Breaking the Curse of Dimensionality, Or How to Use SVD in Many Dimensions
- Tensor Rank and the Ill-Posedness of the Best Low-Rank Approximation Problem
- Algorithms for Numerical Analysis in High Dimensions
- A Stochastic Collocation Method for Elliptic Partial Differential Equations with Random Input Data
- Tensor numerical methods in scientific computing
- Tensor spaces and numerical tensor calculus
- Local convergence of the alternating least squares algorithm for canonical tensor approximation
- The LATIN multiscale computational method and the proper generalized decomposition
- Greedy algorithms and \(M\)-term approximation with regard to redundant dictionaries
- A literature survey of low-rank tensor approximation techniques
- Canonical polyadic decomposition of third-order tensors: reduction to generalized eigenvalue decomposition
- The number of singular vector tuples and uniqueness of best rank-one approximation of tensors
- Low rank Tucker-type tensor approximation to classical potentials
- Rank-one approximation to high order tensors
- On local convergence of alternating schemes for optimization of convex problems in the tensor train format
- Multigrid accelerated tensor approximation of function related multidimensional arrays
- Optimization-based algorithms for tensor decompositions: canonical polyadic decomposition, decomposition in rank-\((L_r,L_r,1)\) terms, and a new generalization
- On accelerating the regularized alternating least-squares algorithm for tensors
- On best rank one approximation of tensors
- Enhanced Line Search: A Novel Method to Accelerate PARAFAC
- Spectral tensor-train decomposition
- A constructive algorithm for decomposing a tensor into a finite sum of orthonormal rank-1 terms
- Tensor networks for dimensionality reduction and large-scale optimization. I: Low-rank tensor decompositions
- Alternating least squares as moving subspace correction
- On Uniqueness and Computation of the Decomposition of a Tensor into Multilinear Rank-$(1,L_r,L_r)$ Terms
- Resonant damping of flexible structures under random excitation
Cited In (13)
- The alternating linear scheme for tensor optimization in the tensor train format
- Higher-order principal component analysis for the approximation of tensors in tree-based low-rank formats
- Quantized CP approximation and sparse tensor interpolation of function-generated data
- The condition number of many tensor decompositions is invariant under Tucker compression
- Performance of the low-rank TT-SVD for large dense tensors on modern multicore CPUs
- Greedy low-rank approximation in Tucker format of solutions of tensor linear systems
- Streaming Tensor Train Approximation
- Tensor train construction from tensor actions, with application to compression of large high order derivative tensors
- High-order tensor estimation via trains of coupled third-order CP and Tucker decompositions
- Black Box Approximation in the Tensor Train Format Initialized by ANOVA Decomposition
- Adaptive hierarchical subtensor partitioning for tensor compression
- Nonnegative tensor train factorization with DMRG technique
- A new scheme for the tensor representation