Error bound of critical points and KL property of exponent 1/2 for squared F-norm regularized factorization
From MaRDI portal
Publication:2052408
Abstract: This paper is concerned with the squared F(robenius)-norm regularized factorization form for noisy low-rank matrix recovery problems. Under a suitable assumption on the restricted condition number of the Hessian of the loss function, we derive an error bound to the true matrix for the non-strict critical points whose rank is not more than that of the true matrix. Then, for the squared F-norm regularized factorized least squares loss function, we establish its KL property of exponent 1/2 on its global minimizer set under the noisy and full sample setting, and achieve this property for a class of critical points under the noisy and partial sample setting. These theoretical findings are also confirmed by solving the squared F-norm regularized factorization problem with an accelerated alternating minimization method.
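The abstract's model admits a simple illustration. What follows is a minimal sketch, not the paper's accelerated algorithm: plain alternating least squares for the squared F-norm regularized factorization \(\min_{U,V} \tfrac{1}{2}\|M - UV^T\|_F^2 + \tfrac{\lambda}{2}(\|U\|_F^2 + \|V\|_F^2)\) in the fully sampled setting. The function name `als_fnorm_reg` and all parameter choices are illustrative assumptions.

```python
# Illustrative sketch (NOT the paper's accelerated method): alternating
# least squares for the squared-F-norm regularized factorization
#   min_{U,V}  1/2 ||M - U V^T||_F^2 + (lam/2) (||U||_F^2 + ||V||_F^2)
# in the noiseless, fully sampled setting.
import numpy as np

def als_fnorm_reg(M, rank, lam=1e-2, iters=200, seed=0):
    m, n = M.shape
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    I = lam * np.eye(rank)
    for _ in range(iters):
        # Each subproblem is a ridge-regularized least squares with a
        # closed-form solution, so the objective decreases monotonically.
        U = M @ V @ np.linalg.inv(V.T @ V + I)
        V = M.T @ U @ np.linalg.inv(U.T @ U + I)
    return U, V

# Usage: factorize a rank-2 matrix and check the relative error.
rng = np.random.default_rng(1)
M = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 20))
U, V = als_fnorm_reg(M, rank=2)
print(np.linalg.norm(M - U @ V.T) / np.linalg.norm(M))
```

With a small regularization weight the product \(UV^T\) recovers the rank-2 matrix up to a slight shrinkage; the KL-exponent-1/2 results surveyed in the abstract are what underwrite linear convergence rates for methods of this alternating type.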
Recommendations
- KL property of exponent \(1/2\) of \(\ell_{2,0}\)-norm and DC regularized factorizations for low-rank matrix recovery
- Error bound for product quadrature rules in \(L_1\)-weighted norm
- Error bounds, quadratic growth, and linear convergence of proximal methods
- Fundamental error estimate inequalities for the Tikhonov regularization using reproducing kernels
- The \({L^2}\) norm error estimate for linear element and its applications
- Optimal error bounds for non-expansive fixed-point iterations in normed spaces
- Error estimates for the regularization of least squares problems
- Error bounds and singularity degree in semidefinite programming
- Sharp 2-norm error bounds for LSQR and the conjugate gradient method
Cites work
- scientific article; zbMATH DE number 3850830
- A block coordinate descent method for regularized multiconvex optimization with applications to nonnegative tensor factorization and completion
- A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization
- An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems
- CoCoLasso for high-dimensional error-in-variables regression
- Estimation of (near) low-rank matrices with noise and high-dimensional scaling
- Exact matrix completion via convex optimization
- Finding low-rank solutions via nonconvex matrix factorization, efficiently and provably
- First-order methods almost always avoid strict saddle points
- Global Optimality in Low-Rank Matrix Optimization
- Guaranteed Matrix Completion via Non-Convex Factorization
- Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization
- Log-concavity and strong log-concavity: a review
- Low-rank matrix completion using alternating minimization
- Matrix completion and low-rank SVD via fast alternating least squares
- Noisy matrix completion: understanding statistical guarantees for convex relaxation via nonconvex optimization
- Optimal rates of convergence for covariance matrix estimation
- Proximal Alternating Minimization and Projection Methods for Nonconvex Problems: An Approach Based on the Kurdyka-Łojasiewicz Inequality
- Restricted strong convexity and weighted matrix completion: optimal bounds with noise
- Symmetry, Saddle Points, and Global Optimization Landscape of Nonconvex Matrix Factorization
- The Global Optimization Geometry of Low-Rank Matrix Optimization
- The non-convex geometry of low-rank matrix optimization
- Tight Oracle Inequalities for Low-Rank Matrix Recovery From a Minimal Number of Noisy Random Measurements
- Variational Analysis
Cited in 2 documents