Implicit Regularization and Entrywise Convergence of Riemannian Optimization for Low Tucker-Rank Tensor Completion

From MaRDI portal
Publication:6375474

arXiv: 2108.07899 | MaRDI QID: Q6375474 | FDO: Q6375474


Authors: Haifeng Wang, Jinchi Chen, Ke Wei


Publication date: 17 August 2021

Abstract: This paper is concerned with the low Tucker-rank tensor completion problem, which is about reconstructing a tensor $\mathcal{T}\in\mathbb{R}^{n\times n\times n}$ of low multilinear rank from partially observed entries. We consider a manifold algorithm (i.e., the Riemannian gradient method) for this problem and reveal an appealing implicit regularization phenomenon of non-convex optimization in low Tucker-rank tensor completion. More precisely, it is rigorously proved that the iterates of the Riemannian gradient method stay in an incoherent region throughout all iterations, provided the number of observed entries is essentially of the order $O(n^{3/2})$. To the best of our knowledge, this is the first work to establish the implicit regularization property of a non-convex method for low Tucker-rank tensor completion under nearly optimal sampling complexity. Additionally, the entrywise convergence of the method is established. The analysis relies on the leave-one-out technique and the subspace projection structure within the algorithm. Some of the technical results developed in the paper may be of broader interest for investigating the properties of other non-convex algorithms.
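
For readers who want a concrete picture of the setting, the NumPy sketch below illustrates a simplified Tucker-rank tensor completion iteration: a gradient step on the observed entries followed by a retraction to low multilinear rank via truncated higher-order SVD. This is an illustrative assumption-laden sketch, not the paper's exact method; in particular, the paper's algorithm additionally exploits a tangent-space/subspace projection structure that is omitted here, and the function names (`unfold`, `hosvd_retract`, `tucker_completion`), the spectral-style initialization, and the step-size heuristic are all choices made for this example.

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding: arrange the mode-k fibers of T as matrix columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_retract(X, ranks):
    """Map X back toward the set of tensors of multilinear rank <= ranks
    via truncated higher-order SVD (a standard quasi-optimal retraction;
    not necessarily the retraction used in the paper)."""
    Us = []
    for k in range(3):
        U, _, _ = np.linalg.svd(unfold(X, k), full_matrices=False)
        Us.append(U[:, :ranks[k]])
    # Core tensor G = X x_1 U1^T x_2 U2^T x_3 U3^T, then expand back.
    G = np.einsum('abc,ai,bj,ck->ijk', X, Us[0], Us[1], Us[2])
    return np.einsum('ijk,ai,bj,ck->abc', G, Us[0], Us[1], Us[2])

def tucker_completion(T_obs, mask, ranks, step=1.0, iters=300):
    """Complete a tensor from the entries where mask is True.
    T_obs must be zero outside the observed entries.
    The 1/p gradient rescaling and unit step are heuristics."""
    p = mask.mean()                      # empirical sampling ratio
    X = hosvd_retract(T_obs / p, ranks)  # spectral-style initialization
    for _ in range(iters):
        # Euclidean gradient of 0.5 * ||P_Omega(X) - T_obs||_F^2
        grad = np.where(mask, X - T_obs, 0.0)
        X = hosvd_retract(X - (step / p) * grad, ranks)
    return X

if __name__ == "__main__":
    # Synthetic test: random rank-(2,2,2) tensor, ~20% entries observed.
    rng = np.random.default_rng(0)
    n, r = 30, 2
    core = rng.standard_normal((r, r, r))
    U = [np.linalg.qr(rng.standard_normal((n, r)))[0] for _ in range(3)]
    T = np.einsum('ijk,ai,bj,ck->abc', core, U[0], U[1], U[2])
    mask = rng.random((n, n, n)) < 0.2
    X = tucker_completion(np.where(mask, T, 0.0), mask, (r, r, r))
    print(np.linalg.norm(X - T) / np.linalg.norm(T))  # relative error
```

The implicit regularization result of the paper concerns precisely such iterates: under the stated sampling complexity, the iterates of the (full) Riemannian method remain incoherent, which is what makes the entrywise convergence analysis possible.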