Best sparse rank-1 approximation to higher-order tensors via a truncated exponential induced regularizer
DOI: 10.1016/j.amc.2022.127433 · OpenAlex: W4288074023 · Wikidata: Q114210797 · Scholia: Q114210797 · MaRDI QID: Q2161912
Publication date: 5 August 2022
Published in: Applied Mathematics and Computation
Full work available at URL: https://doi.org/10.1016/j.amc.2022.127433
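For context, the best sparse rank-1 approximation problem treated in this publication is usually posed as a regularized low-rank fitting problem. The generic formulation below is a sketch following the standard form used in the cited sparse rank-1 approximation literature; the precise truncated exponential induced regularizer \(\phi\) is defined in the paper itself and is only represented abstractly here:

\[
\min_{\lambda \in \mathbb{R},\; x^{(1)},\dots,x^{(d)}} \bigl\|\mathcal{A} - \lambda\, x^{(1)} \circ x^{(2)} \circ \cdots \circ x^{(d)}\bigr\|_F^2 \;+\; \rho \sum_{k=1}^{d} \sum_{i} \phi\bigl(x^{(k)}_i\bigr), \qquad \|x^{(k)}\|_2 = 1,
\]

where \(\mathcal{A}\) is an order-\(d\) tensor, \(\circ\) denotes the vector outer product, \(\rho > 0\) balances fit against sparsity, and \(\phi\) is a sparsity-inducing penalty, in this work one induced by a truncated exponential function.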
Cites Work
- A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis
- Tensor Decompositions and Applications
- The sparsest solutions to \(Z\)-tensor complementarity problems
- Tensor eigenvalues and their applications
- On the Lambert \(W\) function
- Several approximation algorithms for sparse best rank-1 approximation to higher-order tensors
- A sparse rank-1 approximation algorithm for high-order tensors
- Consistent selection of the number of clusters via crossvalidation
- Newton-based optimization for Kullback–Leibler nonnegative tensor factorizations
- Algorithm 862: MATLAB tensor classes for fast algorithm prototyping
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Conditional Gradient Algorithms for Rank-One Matrix Approximations with a Sparsity Constraint
- On Tensors, Sparsity, and Nonnegative Factorizations
- Dynamic Tensor Clustering
- Provable Sparse Tensor Decomposition