Local convergence of alternating low‐rank optimization methods with overrelaxation
From MaRDI portal
Publication:6133038
Abstract: The local convergence of alternating optimization methods with overrelaxation for low-rank matrix and tensor problems is established. The analysis is based on a linearization of the method, which takes the form of an SOR iteration for a positive semidefinite Hessian and can be studied in the corresponding quotient geometry of equivalent low-rank representations. In the matrix case, the optimal relaxation parameter for accelerating local convergence can be determined from the convergence rate of the standard method. This result relies on a version of Young's SOR theorem for positive semidefinite block systems.
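The mechanism the abstract refers to can be illustrated numerically. The sketch below (not taken from the paper) applies the classical Young SOR relationship on a small consistently ordered symmetric positive definite block system standing in for the linearized Hessian: the rate of the standard method (omega = 1, i.e. block Gauss-Seidel, the analogue of plain alternating optimization) determines the optimal relaxation parameter via omega* = 2 / (1 + sqrt(1 - rho_GS)). The specific test matrix is an illustrative assumption.

```python
import numpy as np

def sor_iteration_matrix(A, omega):
    """SOR iteration matrix: M = (D/omega + L)^{-1} ((1/omega - 1) D - U)."""
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    U = np.triu(A, 1)
    return np.linalg.inv(D / omega + L) @ ((1.0 / omega - 1.0) * D - U)

def spectral_radius(M):
    """Asymptotic linear convergence rate of the iteration x -> M x + c."""
    return max(abs(np.linalg.eigvals(M)))

# A small symmetric positive definite system with a red-black (consistently
# ordered) 2x2 block structure, standing in for the PSD Hessian of the
# linearized alternating method; chosen for illustration only.
A = np.array([
    [2.0, 0.0, 1.0, 0.0],
    [0.0, 2.0, 0.0, 1.0],
    [1.0, 0.0, 2.0, 0.0],
    [0.0, 1.0, 0.0, 2.0],
])

# Convergence rate of the standard method (omega = 1, block Gauss-Seidel).
rho_gs = spectral_radius(sor_iteration_matrix(A, 1.0))

# Young's theorem: rho_GS = mu^2 with mu the Jacobi spectral radius, and the
# SOR rate is minimized at omega* = 2 / (1 + sqrt(1 - mu^2)),
# i.e. omega* = 2 / (1 + sqrt(1 - rho_GS)), where the rate equals omega* - 1.
omega_opt = 2.0 / (1.0 + np.sqrt(1.0 - rho_gs))
rho_opt = spectral_radius(sor_iteration_matrix(A, omega_opt))

print(rho_gs)     # rate of the standard method
print(omega_opt)  # optimal overrelaxation parameter derived from it
print(rho_opt)    # accelerated rate, equal to omega_opt - 1
```

For this matrix the Jacobi radius is mu = 1/2, so the Gauss-Seidel rate is 1/4 and overrelaxation with omega* roughly 1.07 cuts the rate to about 0.07, illustrating how knowing the standard method's rate suffices to pick the optimal parameter.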
Recommendations
- Alternating least squares as moving subspace correction
- Local convergence of the alternating least squares algorithm for canonical tensor approximation
- Linear convergence of an alternating polar decomposition method for low rank orthogonal tensor approximations
- Provable accelerated gradient method for nonconvex low rank optimization
- On local convergence of alternating schemes for optimization of convex problems in the tensor train format