Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization

Publication: 3586174


DOI: 10.1137/070697835
zbMath: 1198.90321
arXiv: 0706.4138
Wikidata: Q94409659
Scholia: Q94409659
MaRDI QID: Q3586174

Benjamin Recht, Maryam Fazel, Pablo A. Parrilo

Publication date: 6 September 2010

Published in: SIAM Review

Full work available at URL: https://arxiv.org/abs/0706.4138


90C25: Convex programming

90C59: Approximation methods and heuristics in mathematical programming

15B52: Random matrices (algebraic aspects)
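
The paper recorded here shows that the minimum-rank solution of a set of linear matrix equations A(X) = b can, under suitable conditions on the linear map A (for example, random Gaussian measurements), be recovered by solving the convex problem min ||X||_* subject to A(X) = b, where ||X||_* is the nuclear norm. The following is a minimal illustrative sketch of that heuristic, not code from the paper; the dimensions, the Gaussian measurement operator, and the use of NumPy and CVXPY are assumptions made for the example.

# Illustrative sketch (not from the paper): recover a low-rank matrix from
# random linear measurements via nuclear norm minimization,
#     minimize ||X||_*  subject to  A(X) = b.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n1, n2, r, m = 20, 20, 2, 240            # matrix size, true rank, number of measurements

# Ground-truth rank-r matrix and a random Gaussian measurement operator A
X_true = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))
A = rng.standard_normal((m, n1 * n2))    # each row gives one linear measurement <A_i, X>
b = A @ X_true.flatten(order="F")        # column-major flattening to match cvxpy's vec()

# Convex surrogate for rank minimization: nuclear norm subject to the linear constraints
X = cp.Variable((n1, n2))
problem = cp.Problem(cp.Minimize(cp.normNuc(X)), [A @ cp.vec(X) == b])
problem.solve()

print("relative recovery error:",
      np.linalg.norm(X.value - X_true) / np.linalg.norm(X_true))

With enough generic measurements relative to the degrees of freedom of a rank-r matrix, the recovered X typically matches X_true to solver precision, which is the phenomenon the paper quantifies.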


Related Items

Low-Rank Inducing Norms with Optimality Interpretations, Optimization Methods for Synthetic Aperture Radar Imaging, A General Theory of Singular Values with Applications to Signal Denoising, A Nonmonotone Alternating Updating Method for a Class of Matrix Factorization Problems, Fast and Reliable Parameter Estimation from Nonlinear Observations, Quadratic Growth Conditions for Convex Matrix Optimization Problems Associated with Spectral Functions, Multifrequency Interferometric Imaging with Intensity-Only Measurements, CGIHT: conjugate gradient iterative hard thresholding for compressed sensing and matrix completion, High-dimensional estimation with geometric constraints: Table 1., Near-optimal estimation of simultaneously sparse and low-rank matrices from nested linear measurements, Stable low-rank matrix recovery via null space properties, Finding Planted Subgraphs with Few Eigenvalues using the Schur--Horn Relaxation, Nuclear norm based two-dimensional sparse principal component analysis, Consistency Analysis for Massively Inconsistent Datasets in Bound-to-Bound Data Collaboration, On the equivalence between low-rank matrix completion and tensor rank, Simultaneous-shot inversion for PDE-constrained optimization problems with missing data, Adapting Regularized Low-Rank Models for Parallel Architectures, Implementing the Alternating Direction Method of Multipliers for Big Datasets: A Case Study of Least Absolute Shrinkage and Selection Operator, Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization, ELASTIC-NET REGULARIZATION FOR LOW-RANK MATRIX RECOVERY, An alternating direction method for linear‐constrained matrix nuclear norm minimization, Strong duality in robust semi-definite linear programming under data uncertainty, The phase transition of matrix recovery from Gaussian measurements matches the minimax MSE of matrix denoising, Compressed modes for variational problems in mathematics and physics, PRIMME_SVDS: A High-Performance Preconditioned SVD Solver for Accurate Large-Scale Computations, Subsampling Algorithms for Semidefinite Programming, A Splitting Augmented Lagrangian Method for Low Multilinear-Rank Tensor Recovery, A Global Approach for Solving Edge-Matching Puzzles, Quasi-linear Compressed Sensing, Orthogonal Rank-One Matrix Pursuit for Low Rank Matrix Completion, A Superlinearly Convergent Smoothing Newton Continuation Algorithm for Variational Inequalities over Definable Sets, On the Support of Compressed Modes, Nonexpansiveness of a linearized augmented Lagrangian operator for hierarchical convex optimization, Multistage Convex Relaxation Approach to Rank Regularized Minimization Problems Based on Equivalent Mathematical Program with a Generalized Complementarity Constraint, Recovery of low-rank matrices based on the rank null space properties, Solving Partial Differential Equations on Manifolds From Incomplete Interpoint Distance, Colour of turbulence, Theoretical and Experimental Analyses of Tensor-Based Regression and Classification, Semidefinite Programming For Chance Constrained Optimization Over Semialgebraic Sets, An Efficient Gauss--Newton Algorithm for Symmetric Low-Rank Product Matrix Approximations, Finding Low-rank Solutions of Sparse Linear Matrix Inequalities using Convex Optimization, Low-Rank Tensor Recovery using Sequentially Optimal Modal Projections in Iterative Hard Thresholding (SeMPIHT), Convex Optimization and Parsimony of $L_p$-balls Representation, An iterative 
algorithm for third-order tensor multi-rank minimization, Phase Retrieval via Matrix Completion, A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers, Discussion: Latent variable graphical model selection via convex optimization, Discussion: Latent variable graphical model selection via convex optimization, Discussion: Latent variable graphical model selection via convex optimization, Discussion: Latent variable graphical model selection via convex optimization, Discussion: Latent variable graphical model selection via convex optimization, Exact matrix completion via convex optimization, Compressive Sensing, On minimal rank solutions to symmetric Lyapunov equations in Euclidean Jordan algebra, A proximal multiplier method for separable convex minimization, Operator-Lipschitz estimates for the singular value functional calculus, Optimal Kullback–Leibler approximation of Markov chains via nuclear norm regularisation, A Subgradient Method Based on Gradient Sampling for Solving Convex Optimization Problems, Collaborative Total Variation: A General Framework for Vectorial TV Models, Low Complexity Regularization of Linear Inverse Problems, Long horizon input parameterisations to enlarge the region of attraction of MPC, Recovery of Low Rank Symmetric Matrices via Schatten p Norm Minimization, Euclidean Distance Matrices and Applications, Matrix completion and tensor rank, Linear Models Based on Noisy Data and the Frisch Scheme, Low-Rank Spectral Optimization via Gauge Duality, Guarantees of Riemannian Optimization for Low Rank Matrix Recovery, Linear Convergence of Descent Methods for the Unconstrained Minimization of Restricted Strongly Convex Functions, Low-Rank Approximation and Completion of Positive Tensors, State-Space Modeling of Two-Dimensional Vector-Exponential Trajectories, Efficient Matrix Sensing Using Rank-1 Gaussian Measurements, Low Rank Estimation of Similarities on Graphs, EXACT LOW-RANK MATRIX RECOVERY VIA NONCONVEX SCHATTEN p-MINIMIZATION, An Overview of Computational Sparse Models and Their Applications in Artificial Intelligence, Forward–backward-based descent methods for composite variational inequalities, A Linearly Convergent Algorithm for Solving a Class of Nonconvex/Affine Feasibility Problems, Entropic Regularization of the ℓ 0 Function, Least-squares solutions and least-rank solutions of the matrix equation AXA *  = B and their relations, Penalty decomposition methods for rank minimization, A nonconvex approach to low-rank matrix completion using convex optimization, An Extended Frank--Wolfe Method with “In-Face” Directions, and Its Application to Low-Rank Matrix Completion, Closed-loop identification of unstable systems using noncausal FIR models, ORBITOPES, NCSOStools: a computer algebra system for symbolic and numerical computation with noncommutative polynomials, A Subspace Method for Large-Scale Eigenvalue Optimization, Truncated $l_{1-2}$ Models for Sparse Recovery and Rank Minimization, Book Review: A mathematical introduction to compressive sensing, An Adaptive Correction Approach for Tensor Completion, Illumination Strategies for Intensity-Only Imaging, Low-Rank and Sparse Multi-task Learning, New Analysis on Sparse Solutions to Random Standard Quadratic Optimization Problems and Extensions, Self-calibration and biconvex compressive sensing, Recovering Structured Signals in Noise: Least-Squares Meets Compressed Sensing, Tensor Completion in Hierarchical Tensor Representations, Rank-1 Tensor Properties with 
Applications to a Class of Tensor Optimization Problems, Geometric median and robust estimation in Banach spaces, Latent variable graphical model selection via convex optimization, Parallel matrix factorization for low-rank tensor completion, An alternating direction method with continuation for nonconvex low rank minimization, On two continuum armed bandit problems in high dimensions, A novel robust principal component analysis method for image and video processing., Two-stage convex relaxation approach to least squares loss constrained low-rank plus sparsity optimization problems, On the construction of general cubature formula by flat extensions, Optimal large-scale quantum state tomography with Pauli measurements, The minimal rank of matrix expressions with respect to Hermitian matrix-revised, Minimum \( n\)-rank approximation via iterative hard thresholding, A new gradient projection method for matrix completion, On the nuclear norm and the singular value decomposition of tensors, A proximal method for composite minimization, Geometric inference for general high-dimensional linear inverse problems, A rank-corrected procedure for matrix completion with fixed basis coefficients, Sharp MSE bounds for proximal denoising, On the need for structure modelling in sequence prediction, An improved robust ADMM algorithm for quantum state tomography, Tucker factorization with missing data with application to low-\(n\)-rank tensor completion, Checking strict positivity of Kraus maps is NP-hard, Improved recovery guarantees for phase retrieval from coded diffraction patterns, Low rank matrix recovery from rank one measurements, Algorithmic aspects of sums of Hermitian squares of noncommutative polynomials, Low rank estimation of smooth kernels on graphs, A variational approach of the rank function, Exact low-rank matrix completion from sparsely corrupted entries via adaptive outlier pursuit, \(s\)-goodness for low-rank matrix recovery, Fast alternating linearization methods for minimizing the sum of two convex functions, Simple bounds for recovering low-complexity models, Some optimization problems on ranks and inertias of matrix-valued functions subject to linear matrix equation restrictions, Approximation of rank function and its application to the nearest low-rank correlation matrix, Sparse nonnegative matrix underapproximation and its application to hyperspectral image analysis, High-dimensional covariance matrix estimation with missing observations, Exact minimum rank approximation via Schatten \(p\)-norm minimization, Sharp recovery bounds for convex demixing, with applications, An approximation theory of matrix rank minimization and its application to quadratic equations, Formulas for calculating the extremum ranks and inertias of a four-term quadratic matrix-valued function and their applications, An implementable proximal point algorithmic framework for nuclear norm minimization, Learning functions of few arbitrary linear parameters in high dimensions, Uniqueness conditions for low-rank matrix recovery, Noisy matrix decomposition via convex relaxation: optimal rates in high dimensions, A kernel-based framework to tensorial data analysis, An efficient matrix bi-factorization alternative optimization method for low-rank matrix recovery and completion, Guaranteed clustering and biclustering via semidefinite programming, Sparse recovery on Euclidean Jordan algebras, Nonmonotone Barzilai-Borwein gradient algorithm for \(\ell_1\)-regularized nonsmooth minimization in compressive sensing, A new 
approximation of the matrix rank function and its application to matrix rank minimization, Minimax risk of matrix denoising by singular value thresholding, Stable optimizationless recovery from phaseless linear measurements, Optimal rank-sparsity decomposition, Equivalence and strong equivalence between the sparsest and least \(\ell _1\)-norm nonnegative solutions of linear systems and their applications, Conditional gradient algorithms for norm-regularized smooth convex optimization, Extreme point inequalities and geometry of the rank sparsity ball, A partial proximal point algorithm for nuclear norm regularized matrix least squares problems, Decomposable norm minimization with proximal-gradient homotopy algorithm, Dimensionality reduction with subgaussian matrices: a unified theory, Finding a low-rank basis in a matrix subspace, Convergence of fixed-point continuation algorithms for matrix rank minimization, Fixed point and Bregman iterative methods for matrix rank minimization, Estimation of high-dimensional low-rank matrices, Estimation of (near) low-rank matrices with noise and high-dimensional scaling, Optimal selection of reduced rank estimators of high-dimensional matrices, Approximation accuracy, gradient methods, and error bound for structured convex optimization, Null space conditions and thresholds for rank minimization, Solving optimization problems on ranks and inertias of some constrained nonlinear matrix functions via an algebraic linearization method, Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion, Convex optimization methods for dimension reduction and coefficient estimation in multivariate linear regression, Low-rank matrix recovery via rank one tight frame measurements, Analysis of convergence for the alternating direction method applied to joint sparse recovery, A new algorithm for positive semidefinite matrix completion, Max-norm optimization for robust matrix recovery, Interpreting latent variables in factor models via convex optimization, An alternating direction algorithm for matrix completion with nonnegative factors, Nuclear norm minimization for the planted clique and biclique problems, Explicit frames for deterministic phase retrieval via PhaseLift, Sparse functional identification of complex cells from spike times and the decoding of visual stimuli, Characterization of the equivalence of robustification and regularization in linear and matrix regression, Robust recovery of complex exponential signals from random Gaussian projections via low rank Hankel matrix reconstruction, A simple prior-free method for non-rigid structure-from-motion factorization, Learning non-parametric basis independent models from point queries via low-rank methods, Fast global convergence of gradient methods for high-dimensional statistical recovery, Semidefinite programming and sums of Hermitian squares of noncommutative polynomials, Learning Markov random walks for robust subspace clustering and estimation, A proximal alternating linearization method for minimizing the sum of two convex functions, Rank constrained matrix best approximation problem, Guaranteed recovery of planted cliques and dense subgraphs by convex relaxation, From compression to compressed sensing, A perturbation inequality for concave functions of singular values and its applications in low-rank matrix recovery, A general family of trimmed estimators for robust high-dimensional data analysis, A penalty method for rank minimization problems in symmetric matrices, Low-rank 
parameterization of planar domains for isogeometric analysis, Trace regression model with simultaneously low rank and row(column) sparse parameter, Stable analysis of compressive principal component pursuit, A globally convergent algorithm for nonconvex optimization based on block coordinate update, The minimal measurement number for low-rank matrix recovery, Proximal iteratively reweighted algorithm for low-rank matrix recovery, \(\ell _p\) regularized low-rank approximation via iterative reweighted singular value minimization, Sparse blind deconvolution and demixing through \(\ell_{1,2}\)-minimization, Affine matrix rank minimization problem via non-convex fraction function penalty, Hybrid reconstruction of quantum density matrix: when low-rank meets sparsity, Painless breakups -- efficient demixing of low rank matrices, A new nonconvex approach to low-rank matrix completion with application to image inpainting, Linear convergence of the randomized sparse Kaczmarz method, A model for influence of nuclear-electricity industry on area economy, Enhancing matrix completion using a modified second-order total variation, Proximal alternating penalty algorithms for nonsmooth constrained convex optimization, Subspace-based spectrum estimation in innovation models by mixed norm minimization, Level-set methods for convex optimization, Learning semidefinite regularizers, Low-rank matrix recovery using Gabidulin codes in characteristic zero, A mixture of nuclear norm and matrix factorization for tensor completion, Stable recovery of low-dimensional cones in Hilbert spaces: one RIP to rule them all, DC formulations and algorithms for sparse optimization problems, Regularization and the small-ball method. I: Sparse recovery, Tensor completion using total variation and low-rank matrix factorization, Low-rank matrix completion using nuclear norm minimization and facial reduction, Equivalent Lipschitz surrogates for zero-norm and rank optimization problems, Block tensor train decomposition for missing data estimation, Convexifying the set of matrices of bounded rank: applications to the quasiconvexification and convexification of the rank function, On finding a generalized lowest rank solution to a linear semi-definite feasibility problem, Approximating the minimum rank of a graph via alternating projection, Error bounds for rank constrained optimization problems and applications, An efficient method for convex constrained rank minimization problems based on DC programming, Speeding up finite-time consensus via minimal polynomial of a weighted graph -- a numerical approach, Robust visual tracking via consistent low-rank sparse learning, The convex geometry of linear inverse problems, TILT: transform invariant low-rank textures, Monotonically convergent algorithms for symmetric tensor approximation, Compressed sensing and matrix completion with constant proportion of corruptions, Discussion: Latent variable graphical model selection via convex optimization, Rejoinder: Latent variable graphical model selection via convex optimization, Accelerated linearized Bregman method, Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm, Second order accurate distributed eigenvector computation for extremely large matrices, Restricted \(p\)-isometry properties of partially sparse signal recovery, Fixed-point algorithms for frequency estimation and structured low rank approximation, Fast and provable algorithms for spectrally sparse signal reconstruction via 
low-rank Hankel matrix completion, Convex low rank approximation, Optimizing shrinkage curves and application in image denoising, An augmented Lagrangian method for the optimal \(H_\infty\) model order reduction problem, Learning latent variable Gaussian graphical model for biomolecular network with low sample complexity, Online optimization for max-norm regularization, A new graph parameter related to bounded rank positive semidefinite matrix completions, Prox-regularity of rank constraint sets and implications for algorithms, Matrix recipes for hard thresholding methods, Learning with tensors: a framework based on convex optimization and spectral regularization, Sharp RIP bound for sparse signal and low-rank matrix recovery, Convergence of projected Landweber iteration for matrix rank minimization, A reweighted nuclear norm minimization algorithm for low rank matrix recovery, Robust linear optimization under matrix completion, Sparse trace norm regularization, ROP: matrix recovery via rank-one projections, An efficient primal dual prox method for non-smooth optimization, A partial derandomization of phaselift using spherical designs, Set membership identification of switched linear systems with known number of subsystems, On the nuclear norm heuristic for a Hankel matrix completion problem, Coordinate descent algorithms, Splitting methods with variable metric for Kurdyka-Łojasiewicz functions and general convergence rates, Minimum rank (skew) Hermitian solutions to the matrix approximation problem in the spectral norm, Semi-supervised learning with nuclear norm regularization, A subgradient-based convex approximations method for DC programming and its applications, On linear convergence of projected gradient method for a class of affine rank minimization problems, Variational analysis of the Ky Fan \(k\)-norm, Unbiased risk estimates for matrix estimation in the elliptical case, Convergence and stability of iteratively reweighted least squares for low-rank matrix recovery, \(S_{1/2}\) regularization methods and fixed point algorithms for affine rank minimization problems, Convex optimization learning of faithful Euclidean distance representations in nonlinear dimensionality reduction, Restricted isometry property of principal component pursuit with reduced linear measurements, When only global optimization matters, Parallel stochastic gradient algorithms for large-scale matrix completion, High-dimensional change-point estimation: combining filtering with convex optimization, Alternating proximal gradient method for convex minimization, Low rank tensor recovery via iterative hard thresholding, Novel alternating update method for low rank approximation of structured matrices, Channel estimation for finite scatterers massive multi-user MIMO system, Iterative methods based on soft thresholding of hierarchical tensors, Rank-constrained optimization and its applications, The degrees of freedom of partly smooth regularizers, The local convexity of solving systems of quadratic equations, Spectral operators of matrices, Sums of Hermitian squares decomposition of non-commutative polynomials in non-symmetric variables using NCSOStools, Iterative hard thresholding for low-rank recovery from rank-one projections, A singular value \(p\)-shrinkage thresholding algorithm for low rank matrix recovery, A simple convergence analysis of Bregman proximal gradient algorithm, Non-smooth non-convex Bregman minimization: unification and new algorithms, Nuclear norm system identification with missing inputs and 
outputs, Convex optimization for the planted \(k\)-disjoint-clique problem, On a unified view of nullspace-type conditions for recoveries associated with general sparsity structures, Sparse PCA: optimal rates and adaptive estimation, Efficient algorithms for robust and stable principal component pursuit problems, Properties and methods for finding the best rank-one approximation to higher-order tensors, Lowest-rank solutions of continuous and discrete Lyapunov equations over symmetric cone, An introduction to a class of matrix cone programming, Relations between least-squares and least-rank solutions of the matrix equation \(AXB=C\), High dimensional covariance matrix estimation using multi-factor models from incomplete information, Convergence analysis of projected gradient descent for Schatten-\(p\) nonconvex matrix recovery, Decentralized and privacy-preserving low-rank matrix completion, Proximal Markov chain Monte Carlo algorithms, Stable recovery of low-rank matrix via nonconvex Schatten \(p\)-minimization


Uses Software