Estimation of low-rank matrices via approximate message passing
From MaRDI portal
Publication:2656598
Abstract: Consider the problem of estimating a low-rank matrix when its entries are perturbed by Gaussian noise. If the empirical distribution of the entries of the spikes is known, optimal estimators that exploit this knowledge can substantially outperform simple spectral approaches. Recent work characterizes the asymptotic accuracy of Bayes-optimal estimators in the high-dimensional limit. In this paper we present a practical algorithm that can achieve Bayes-optimal accuracy above the spectral threshold. A bold conjecture from statistical physics posits that no polynomial-time algorithm achieves optimal error below the same threshold (unless the best estimator is trivial). Our approach uses Approximate Message Passing (AMP) in conjunction with a spectral initialization. AMP algorithms have proved successful in a variety of statistical estimation tasks, and are amenable to exact asymptotic analysis via state evolution. Unfortunately, state evolution is uninformative when the algorithm is initialized near an unstable fixed point, as often happens in low-rank matrix estimation. We develop a new analysis of AMP that allows for spectral initializations. Our main theorem is general and applies beyond matrix estimation. However, we use it to derive detailed predictions for the problem of estimating a rank-one matrix in noise. Special cases of this problem are closely related, via universality arguments, to the network community detection problem for two asymmetric communities. For general rank-one models, we show that AMP can be used to construct confidence intervals and control the false discovery rate. We provide illustrations of the general methodology by considering the cases of sparse low-rank matrices and of block-constant low-rank matrices with symmetric blocks (we refer to the latter as the "Gaussian Block Model").
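To make the abstract's setup concrete, the following is a minimal sketch of AMP with a spectral initialization for the symmetric rank-one spiked model Y = (λ/n) x xᵀ + W (W a GOE-like matrix with entries of variance 1/n). It is an illustration under simplifying assumptions, not the paper's algorithm: the denoiser tanh(λ·) stands in for the Bayes-optimal posterior mean (appropriate for a ±1 spike), and the plain Onsager memory term is used even at the first iteration, whereas handling the spectral initialization rigorously is exactly the delicate point the paper addresses.

```python
import numpy as np

rng = np.random.default_rng(0)

def amp_rank_one(Y, iters=15, lam=3.0):
    """Sketch of AMP for Y = (lam/n) x x^T + W with spectral
    initialization. tanh(lam * .) is a stand-in denoiser for a
    +/-1 (Rademacher) spike; signs are recovered up to a global flip."""
    n = Y.shape[0]
    # Spectral initialization: top eigenvector of Y, rescaled to norm sqrt(n).
    _, eigvecs = np.linalg.eigh(Y)
    x = np.sqrt(n) * eigvecs[:, -1]
    f_prev = np.zeros(n)
    for _ in range(iters):
        f = np.tanh(lam * x)
        # Onsager correction: empirical average of the denoiser's derivative.
        b = np.mean(lam * (1.0 - f ** 2))
        x, f_prev = Y @ f - b * f_prev, f
    return np.sign(x)

# Demo: planted +/-1 spike above the spectral threshold (lam > 1).
n, lam = 800, 3.0
x_star = rng.choice([-1.0, 1.0], size=n)
W = rng.normal(size=(n, n)) / np.sqrt(n)
W = (W + W.T) / np.sqrt(2)            # symmetric noise, entry variance ~ 1/n
Y = (lam / n) * np.outer(x_star, x_star) + W
x_hat = amp_rank_one(Y, lam=lam)
overlap = abs(x_hat @ x_star) / n     # absolute value: global sign ambiguity
```

Above the spectral threshold (here λ = 3 > 1), the top eigenvector already correlates with the spike, and the AMP iterations sharpen that estimate; the normalized overlap should be close to 1.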
Recommendations
- Constrained low-rank matrix estimation: phase transitions, approximate message passing and applications
- Fundamental limits of symmetric low-rank matrix estimation
- Iterative reconstruction of rank-one matrices in noise
- Universality of approximate message passing algorithms
- Approximate message passing algorithms for rotationally invariant matrices
Cites work
- scientific article; zbMATH DE number 720689
- scientific article; zbMATH DE number 5204610
- A Direct Approach to False Discovery Rates
- Accurate Prediction of Phase Transitions in Compressed Sensing via a Connection to Minimax Denoising
- An iterative construction of solutions of the TAP equations for the Sherrington-Kirkpatrick model
- Asymptotic mutual information for the balanced binary stochastic block model
- Asymptotics of sample eigenstructure for a large dimensional spiked covariance model
- Bilinear Generalized Approximate Message Passing—Part I: Derivation
- Community detection and stochastic block models: recent developments
- Constrained low-rank matrix estimation: phase transitions, approximate message passing and applications
- Eigenvalues of large sample covariance matrices of spiked population models
- Finding hidden cliques of size \(\sqrt{N/e}\) in nearly linear time
- Fundamental limits of symmetric low-rank matrix estimation
- Generalized power method for sparse principal component analysis
- Iterative reconstruction of rank-one matrices in noise
- Large-scale inference. Empirical Bayes methods for estimation, testing, and prediction
- Learning Theory
- Learning the parts of objects by non-negative matrix factorization
- Markov chain Monte Carlo. Stochastic simulation for Bayesian inference.
- Minimax estimation via wavelet shrinkage
- Minimax risk over \(l_ p\)-balls for \(l_ q\)-error
- Mutual Information and Minimum Mean-Square Error in Gaussian Channels
- Non-Negative Principal Component Analysis: Message Passing Algorithms and Sharp Asymptotics
- On consistency and sparsity for principal components analysis in high dimensions
- On the distribution of the largest eigenvalue in principal components analysis
- Optimal Transport
- Phase Transitions and Sample Complexity in Bayes-Optimal Matrix Factorization
- Phase transition of the largest eigenvalue for nonnull complex sample covariance matrices
- Phase transitions in semidefinite relaxations
- Sparse principal component analysis and iterative thresholding
- Spectral analysis of large dimensional random matrices
- State evolution for approximate message passing with non-separable functions
- State evolution for general approximate message passing algorithms, with applications to spatial coupling
- The Computer Science and Physics of Community Detection: Landscapes, Phase Transitions, and Hardness
- The Dynamics of Message Passing on Dense Graphs, with Applications to Compressed Sensing
- The eigenvalues and eigenvectors of finite, low rank perturbations of large random matrices
- The isotropic semicircle law and deformation of Wigner matrices
- The largest eigenvalue of rank one deformation of large Wigner matrices
- The largest eigenvalues of finite rank deformation of large Wigner matrices: Convergence and nonuniversality of the fluctuations
- The projected power method: an efficient algorithm for joint alignment from pairwise differences
- The singular values and vectors of low rank perturbations of large rectangular random matrices
- Truncated power method for sparse eigenvalue problems
- Universality in polytope phase transitions and message passing algorithms
Cited in (28)
- A Friendly Tutorial on Mean-Field Spin Glass Techniques for Non-Physicists
- Approximate message passing with spectral initialization for generalized linear models*
- Constrained low-rank matrix estimation: phase transitions, approximate message passing and applications
- A Unifying Tutorial on Approximate Message Passing
- Approximate message passing algorithms for rotationally invariant matrices
- State evolution for approximate message passing with non-separable functions
- Von Neumann entropy penalization and low-rank matrix estimation
- Fundamental limits of symmetric low-rank matrix estimation
- Fundamental limits of weak recovery with applications to phase retrieval
- Approximate Message Passing With Consistent Parameter Estimation and Applications to Sparse Learning
- Iterative reconstruction of rank-one matrices in noise
- Computational barriers to estimation from low-degree polynomials
- Universality of approximate message passing algorithms and tensor networks
- Universality of approximate message passing algorithms
- The decimation scheme for symmetric matrix factorization
- Fundamental limits of low-rank matrix estimation with diverging aspect ratios
- Local convexity of the TAP free energy and AMP convergence for \(\mathbb{Z}_2\)-synchronization
- On convergence of the cavity and Bolthausen's TAP iterations to the local magnetization
- Approximate message passing with rigorous guarantees for pooled data and quantitative group testing
- On the TAP equations via the cavity approach in the generic mixed \(p\)-spin models
- Approximate message passing for orthogonally invariant ensembles: multivariate non-linearities and spectral initialization
- Statistically optimal first-order algorithms: a proof via orthogonalization
- Submatrix localization via message passing
- Generalized TAP Free Energy
- Estimation of (near) low-rank matrices with noise and high-dimensional scaling
- Estimation of high-dimensional low-rank matrices
- Optimal combination of linear and spectral estimators for generalized linear models
- Universality of approximate message passing with semirandom matrices