Tensor neural network models for tensor singular value decompositions
DOI: 10.1007/s10589-020-00167-1 · zbMath: 1441.65040 · OpenAlex: W3003125580 · Wikidata: Q126328275 · Scholia: Q126328275 · MaRDI QID: Q2307707
Mao-Lin Che, Yi-Min Wei, Xue-Zhong Wang
Publication date: 25 March 2020
Published in: Computational Optimization and Applications
Full work available at URL: https://doi.org/10.1007/s10589-020-00167-1
singular value decomposition; asymptotic stability; tensor decomposition; tensor singular value decomposition; tensor neural networks
Numerical computation of eigenvalues and eigenvectors of matrices (65F15); Eigenvalues, singular values, and eigenvectors (15A18); Iterative numerical methods for linear systems (65F10); Multilinear algebra, tensor calculus (15A69)
Related Items (16)
Uses Software
Cites Work
- Tensor Decompositions and Applications
- Tensor-Train Decomposition
- Factorization strategies for third-order tensors
- Differential-geometric Newton method for the best rank-\((R_1, R_2, R_3)\) approximation of tensors
- Report on test matrices for generalized inverses
- Numerical computation of an analytic singular value decomposition of a matrix valued function
- Differential equations for the analytic singular value decomposition of a matrix
- Independent component analysis, a new concept?
- Neurodynamical optimization
- Krylov-type methods for tensor computations. I
- Two finite-time convergent Zhang neural network models for time-varying complex matrix Drazin inverse
- Generalized tensor function via the tensor singular value decomposition based on the T-product
- Analysis of individual differences in multidimensional scaling via an \(n\)-way generalization of ``Eckart-Young'' decomposition
- Randomized algorithms for the approximations of Tucker and the tensor train decompositions
- Dynamical Approximation by Hierarchical Tucker and Tensor-Train Tensors
- A literature survey of low-rank tensor approximation techniques
- Wedderburn Rank Reduction and Krylov Subspace Method for Tensor Approximation. Part 1: Tucker Case
- Tensor Networks for Dimensionality Reduction and Large-scale Optimization: Part 1 Low-Rank Tensor Decompositions
- Best Low Multilinear Rank Approximation of Higher-Order Tensors, Based on the Riemannian Trust-Region Scheme
- Hierarchical Singular Value Decomposition of Tensors
- Dynamical Tensor Approximation
- Tensor Spaces and Numerical Tensor Calculus
- A Newton–Grassmann Method for Computing the Best Multilinear Rank-\((r_1, r_2, r_3)\) Approximation of a Tensor
- On Smooth Decompositions of Matrices
- A Multilinear Singular Value Decomposition
- On the Best Rank-1 and Rank-\((R_1, R_2, \ldots, R_N)\) Approximation of Higher-Order Tensors
- Exact Tensor Completion Using t-SVD
- Quasi-Newton Methods on Grassmannians and Multilinear Approximations of Tensors
- Modified gradient dynamic approach to the tensor complementarity problem
- Third-Order Tensors as Operators on Matrices: A Theoretical and Computational Framework with Applications in Imaging
- Symmetric Tensors and Symmetric Tensor Rank
- Dynamical Low‐Rank Approximation
- Third-order tensors as linear operators on a space of matrices