Block tensor train decomposition for missing data estimation
From MaRDI portal
Publication:1757240
DOI: 10.1007/s00362-018-1043-8
zbMath: 1409.65026
OpenAlex: W2890153472
Wikidata: Q129326587 (Scholia: Q129326587)
MaRDI QID: Q1757240
Publication date: 3 January 2019
Published in: Statistical Papers
Full work available at URL: https://doi.org/10.1007/s00362-018-1043-8
Cites Work
- Tensor Decompositions and Applications
- Tensor-Train Decomposition
- Computation of extreme eigenvalues in higher dimensions using block tensor train format
- On tensor completion via nuclear norm minimization
- Tucker factorization with missing data with application to low-\(n\)-rank tensor completion
- Low-rank tensor completion by Riemannian optimization
- \(O(d \log N)\)-quantics approximation of \(N\)-\(d\) tensors in high-dimensional numerical modeling
- Fundamental tensor operations for large-scale data analysis using tensor network formats
- Weighted least squares fitting using ordinary least squares algorithms
- Principal component analysis
- On minimal subspaces in tensor representations
- A new scheme for the tensor representation
- On manifolds of tensors of fixed TT-rank
- Analysis of individual differences in multidimensional scaling via an \(n\)-way generalization of ``Eckart-Young'' decomposition
- Exact matrix completion via convex optimization
- Regularized Computation of Approximate Pseudoinverse of Large Matrices Using Low-Rank Tensor Train Decompositions
- Riemannian Optimization for High-Dimensional Tensor Completion
- A literature survey of low-rank tensor approximation techniques
- The Alternating Linear Scheme for Tensor Optimization in the Tensor Train Format
- Alternating Minimal Energy Methods for Linear Systems in Higher Dimensions
- Low-Rank Tensor Methods with Subspace Correction for Symmetric Eigenvalue Problems
- Numerical Methods for Large Eigenvalue Problems
- Hierarchical Singular Value Decomposition of Tensors
- Tensor completion and low-n-rank tensor recovery via convex optimization
- Tensor conjugate-gradient-type method for Rayleigh quotient minimization in block QTT-format
- Tensor Spaces and Numerical Tensor Calculus
- Breaking the Curse of Dimensionality, Or How to Use SVD in Many Dimensions
- Low-Rank Matrix Approximation with Weights or Missing Data Is NP-Hard
- Variants of Alternating Least Squares Tensor Completion in the Tensor Train Format
- Tensor Completion in Hierarchical Tensor Representations
- Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization
- Lower Rank Approximation of Matrices by Least Squares with Any Choice of Weights
- Computing Low-Rank Approximations of Large-Scale Matrices with the Tensor Network Randomized SVD
- Nuclear norm of higher-order tensors
- Efficient Tensor Completion for Color Image and Video Recovery: Low-Rank Tensor Train
- Smooth PARAFAC Decomposition for Tensor Completion
- Solution of Linear Systems and Matrix Inversion in the TT-Format
- Fast Solution of Parabolic Problems in the Tensor Train/Quantized Tensor Train Format with Initial Application to the Fokker--Planck Equation
- Recommender Systems Handbook
- Estimating a Few Extreme Singular Values and Vectors for Large-Scale Matrices in Tensor Train Format
- The Power of Convex Relaxation: Near-Optimal Matrix Completion