Bilinear Generalized Approximate Message Passing—Part I: Derivation
Publication: 4579568
DOI: 10.1109/TSP.2014.2357776
zbMATH Open: 1394.94447
arXiv: 1310.2632
OpenAlex: W1510028821
MaRDI QID: Q4579568
FDO: Q4579568
Authors: Jason Parker, Philip Schniter, Volkan Cevher
Publication date: 22 August 2018
Published in: IEEE Transactions on Signal Processing
Abstract: We extend the generalized approximate message passing (G-AMP) approach, originally proposed for high-dimensional generalized-linear regression in the context of compressive sensing, to the generalized-bilinear case, which enables its application to matrix completion, robust PCA, dictionary learning, and related matrix-factorization problems. In the first part of the paper, we derive our Bilinear G-AMP (BiG-AMP) algorithm as an approximation of the sum-product belief propagation algorithm in the high-dimensional limit, where central-limit theorem arguments and Taylor-series approximations apply, and under the assumption of statistically independent matrix entries with known priors. In addition, we propose an adaptive damping mechanism that aids convergence under finite problem sizes, an expectation-maximization (EM)-based method to automatically tune the parameters of the assumed priors, and two rank-selection strategies. In the second part of the paper, we discuss the specializations of EM-BiG-AMP to the problems of matrix completion, robust PCA, and dictionary learning, and present the results of an extensive empirical study comparing EM-BiG-AMP to state-of-the-art algorithms on each problem. Our numerical results, using both synthetic and real-world datasets, demonstrate that EM-BiG-AMP yields excellent reconstruction accuracy (often best in class) while maintaining competitive runtimes and avoiding the need to tune algorithmic parameters.
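The abstract's two key ingredients are the generalized-bilinear model (estimating both factors of a matrix product from noisy observations) and a damping mechanism that stabilizes the iterations at finite problem sizes. BiG-AMP itself involves message-passing approximations beyond the scope of this page; as a minimal illustration of the bilinear setting and the role of damping only, here is a toy damped alternating least-squares estimator for a noisy rank-1 factorization. This is not the authors' algorithm, and all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 40

# Ground-truth rank-1 factors and a noisy bilinear observation Y ≈ a b^T.
a_true = rng.standard_normal(m)
b_true = rng.standard_normal(n)
Y = np.outer(a_true, b_true) + 0.01 * rng.standard_normal((m, n))

# Toy damped alternating estimation (illustrative stand-in for BiG-AMP).
a = rng.standard_normal(m)
b = rng.standard_normal(n)
beta = 0.7  # damping factor in (0, 1]; smaller values damp more aggressively
for _ in range(100):
    a_new = Y @ b / (b @ b)        # least-squares update for a, holding b fixed
    a = beta * a_new + (1 - beta) * a
    b_new = Y.T @ a / (a @ a)      # least-squares update for b, holding a fixed
    b = beta * b_new + (1 - beta) * b

# Relative reconstruction error of the estimated rank-1 matrix.
err = np.linalg.norm(np.outer(a, b) - np.outer(a_true, b_true)) \
      / np.linalg.norm(np.outer(a_true, b_true))
```

Note that the individual factors are only identifiable up to a scalar (a, b) → (ca, b/c), so the error is measured on the product rather than on the factors, mirroring how matrix-factorization accuracy is typically reported.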
Full work available at URL: https://arxiv.org/abs/1310.2632
Recommendations
- Bilinear Generalized Approximate Message Passing—Part II: Applications
- Multi-Layer Bilinear Generalized Approximate Message Passing
- Approximate message passing with spectral initialization for generalized linear models*
- A Unifying Tutorial on Approximate Message Passing
- Approximate Message Passing With Unitary Transformation for Robust Bilinear Recovery
- On the Convergence of Approximate Message Passing With Arbitrary Matrices
- Fixed Points of Generalized Approximate Message Passing With Arbitrary Matrices
- Vector Approximate Message Passing
- Binary Linear Classification and Feature Selection via Generalized Approximate Message Passing
- Approximate Message Passing With Consistent Parameter Estimation and Applications to Sparse Learning
Cited In (10)
- Constrained low-rank matrix estimation: phase transitions, approximate message passing and applications
- Perturbative construction of mean-field equations in extensive-rank matrix factorization and denoising
- A Unifying Tutorial on Approximate Message Passing
- Flexible low-rank statistical modeling with missing data and side information
- The decimation scheme for symmetric matrix factorization
- Universality of approximate message passing algorithms
- Unique sharp local minimum in \(\ell_1\)-minimization complete dictionary learning
- Low-Rank Matrix Estimation from Rank-One Projections by Unlifted Convex Optimization
- Approximate matrix completion based on cavity method
- Estimation of low-rank matrices via approximate message passing