Nonsmooth Rate-of-Convergence Analyses of Algorithms for Eigenvalue Optimization
arXiv: 1805.04393
MaRDI QID: Q6301468
FDO: Q6301468
Publication date: 11 May 2018
Abstract: Nonsmoothness at optimal points is a common phenomenon in many eigenvalue optimization problems. We consider two recent algorithms for minimizing the largest eigenvalue of a Hermitian matrix depending on one parameter, both proven to be globally convergent regardless of nonsmoothness. One algorithm models the eigenvalue function with a piecewise quadratic function and is effective on nonconvex problems. The other projects the Hermitian matrix onto subspaces formed of eigenvectors and is effective on large-scale problems. We generalize the latter slightly to cope with nonsmoothness. For both algorithms we analyze the rate of convergence in the nonsmooth setting, when the largest eigenvalue is multiple at the minimizer and zero lies strictly in the interior of the generalized Clarke derivative, and prove that both algorithms converge rapidly. The algorithms are applied to, and the deduced results are illustrated on, the computation of the inner numerical radius, the modulus of the point on the boundary of the field of values closest to the origin, which is significant for instance for the numerical solution of a definite generalized symmetric eigenvalue problem.
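To make the subspace idea from the abstract concrete, the following is a minimal, hedged sketch (not the authors' implementation) of minimizing t -> lambda_max(A(t)) by projecting the Hermitian family onto a growing subspace spanned by eigenvectors collected at previous iterates. The affine family A(t) = A0 + t*A1, the grid search over t, and the choice to append the top two eigenvectors at each iterate are all illustrative assumptions.

```python
import numpy as np

def lam_max(H):
    """Largest eigenvalue of a Hermitian matrix (eigvalsh sorts ascending)."""
    return np.linalg.eigvalsh(H)[-1]

def subspace_min_lam_max(A, t_grid, tol=1e-8, max_iter=20):
    """Minimize t -> lambda_max(A(t)) by subspace projection (sketch).

    A      : callable, t -> n x n Hermitian matrix
    t_grid : candidate parameter values; a grid search stands in for a
             proper one-dimensional solver on the reduced problem
    """
    # Seed the subspace with the top eigenvector at a midpoint of the grid.
    _, Q = np.linalg.eigh(A(t_grid[len(t_grid) // 2]))
    V = Q[:, [-1]]
    prev = np.inf
    for _ in range(max_iter):
        # Minimize lambda_max of the small projected family V^* A(t) V.
        vals = [lam_max(V.conj().T @ A(t) @ V) for t in t_grid]
        k = int(np.argmin(vals))
        t_best, val = t_grid[k], vals[k]
        if abs(prev - val) < tol:
            break
        prev = val
        # Expand V with eigenvectors of the full matrix at the new iterate;
        # near a nonsmooth minimizer the largest eigenvalue is multiple, so
        # the top two eigenvectors are appended (a heuristic choice here).
        _, Q = np.linalg.eigh(A(t_best))
        V, _ = np.linalg.qr(np.hstack([V, Q[:, -2:]]))
    return t_best, val

# Toy usage on a random affine Hermitian family A(t) = A0 + t*A1.
rng = np.random.default_rng(0)
n = 50
herm = lambda M: (M + M.conj().T) / 2
A0 = herm(rng.standard_normal((n, n)))
A1 = herm(rng.standard_normal((n, n)))
t_star, val = subspace_min_lam_max(lambda t: A0 + t * A1,
                                   np.linspace(-2.0, 2.0, 401))
print(f"minimizer t ~ {t_star:.3f}, lambda_max(A(t)) ~ {val:.6f}")
```

For an affine family one would precompute V^* A0 V and V^* A1 V so that each reduced evaluation costs nothing in n; the sketch omits this for brevity, but it is the reason the projection approach suits large-scale problems.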
MSC classifications:
- Large-scale problems in mathematical programming (90C06)
- Numerical computation of eigenvalues and eigenvectors of matrices (65F15)
- Nonconvex programming, global optimization (90C26)
- Norms of matrices, numerical range, applications of functional analysis to matrix theory (15A60)