Riemannian optimization on tensor products of Grassmann manifolds: applications to generalized Rayleigh-quotients (Q2903120)

From MaRDI portal

scientific article; zbMATH DE number 6070725

      Statements

      Publication date: 23 August 2012
      Keywords: Riemannian optimization; Grassmann manifold; best approximation of tensors; Newton method; conjugate gradient method; Rayleigh quotient; signal processing; data compression; quantum computing; image processing; sorting; numerical experiments; large scale problems
      The authors consider a class of constrained Riemannian optimization problems by introducing a generalized Rayleigh quotient on the direct product of Grassmann manifolds. Optimization problems of this kind arise in various application areas, such as low-rank tensor approximations in statistics, signal processing, and data compression; geometric measures of pure-state entanglement in quantum computing; subspace reconstruction problems in image processing; and sorting tasks in combinatorics.

      They characterize the critical points of the generalized Rayleigh quotient and give non-degeneracy conditions for the Hessians. They introduce Newton-like and conjugate gradient methods for the optimization of high-dimensional tensors. Based on numerical experiments, the conjugate gradient method turns out to be the better candidate for this kind of large-scale problem, owing to its low computational cost and fast run time.
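      The following is a minimal illustrative sketch (in Python with NumPy, not taken from the paper) of the basic geometric ingredients behind such methods: Riemannian gradient ascent for the classical Rayleigh quotient tr(X^T A X) on a single Grassmann manifold Gr(n, p), using the tangent-space projection of the Euclidean gradient and a QR-based retraction. The paper itself treats a generalized Rayleigh quotient on a product of Grassmann manifolds with Newton-like and conjugate gradient methods; all function names, step sizes, and the simplification to plain gradient ascent below are assumptions made for this sketch.

# Minimal sketch (assumption: plain Riemannian gradient ascent, not the
# authors' Newton-like or conjugate gradient method) for maximizing the
# classical Rayleigh quotient f(X) = tr(X^T A X) over the Grassmann
# manifold Gr(n, p), represented by orthonormal n x p matrices X.
import numpy as np

def riemannian_grad(A, X):
    # Euclidean gradient of tr(X^T A X) is 2 A X; project it onto the
    # horizontal (tangent) space at X: (I - X X^T) (2 A X).
    G = 2.0 * (A @ X)
    return G - X @ (X.T @ G)

def retract(X, V):
    # QR-based retraction: map the tangent step V back onto the manifold.
    Q, R = np.linalg.qr(X + V)
    signs = np.sign(np.sign(np.diag(R)) + 0.5)   # fix column signs
    return Q * signs

def grassmann_rayleigh_ascent(A, p, steps=2000, step_size=2e-2, seed=0):
    # Gradient ascent with a fixed step size; at a maximizer the columns
    # of X span the invariant subspace of the p largest eigenvalues of A.
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    X, _ = np.linalg.qr(rng.standard_normal((n, p)))   # random starting point
    for _ in range(steps):
        X = retract(X, step_size * riemannian_grad(A, X))
    return X

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    M = rng.standard_normal((50, 50))
    A = 0.5 * (M + M.T)                              # symmetric test matrix
    X = grassmann_rayleigh_ascent(A, p=3)
    print("achieved value              :", np.trace(X.T @ A @ X))
    print("sum of 3 largest eigenvalues:", np.linalg.eigvalsh(A)[-3:].sum())

      A Riemannian conjugate gradient method of the kind favored in the paper's experiments would replace the fixed-step gradient update with a search direction that combines the current projected gradient and the transported previous direction, together with a line search; the geometric primitives shown here (projected gradient and retraction) remain the same.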

      Identifiers

      zbMATH DE number 6070725