Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization

From MaRDI portal

DOI: 10.1137/070697835 · zbMath: 1198.90321 · arXiv: 0706.4138 · OpenAlex: W2118550318 · Wikidata: Q94409659 · Scholia: Q94409659 · MaRDI QID: Q3586174

Benjamin Recht, Maryam Fazel, Pablo A. Parrilo

Publication date: 6 September 2010

Published in: SIAM Review

Full work available at URL: https://arxiv.org/abs/0706.4138
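The paper above proposes recovering the minimum-rank solution of a linear matrix equation by minimizing the nuclear norm (the sum of singular values), the tightest convex surrogate for rank. As a minimal illustrative sketch (not code from the paper; function names are ours), the nuclear norm and its proximal operator, the singular value thresholding step that underlies many of the algorithms in the related items below, can be written with plain NumPy:

```python
import numpy as np

def nuclear_norm(X):
    # Sum of singular values: the convex envelope of rank(X)
    # on the unit ball of the spectral norm.
    return np.linalg.svd(X, compute_uv=False).sum()

def svt(X, tau):
    # Singular value thresholding: the proximal operator of
    # tau * ||.||_*, obtained by soft-thresholding the spectrum.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

X = np.diag([3.0, 1.0])
print(nuclear_norm(X))            # 4.0
Y = svt(X, 1.0)                   # shrinks spectrum to (2, 0)
print(np.linalg.matrix_rank(Y))   # 1
```

Because soft-thresholding zeroes out small singular values, iterating this prox step inside a gradient scheme drives iterates toward low-rank solutions, which is the mechanism shared by the thresholding and proximal algorithms listed below.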



Related Items

Solving orthogonal group synchronization via convex and low-rank optimization: tightness and landscape analysis, Tensor completion by multi-rank via unitary transformation, Iterative hard thresholding for low CP-rank tensor models, High-dimensional latent panel quantile regression with an application to asset pricing, Covariance prediction via convex optimization, Robust sensing of low-rank matrices with non-orthogonal sparse decomposition, Multiple change points detection in high-dimensional multivariate regression, Low-rank matrix recovery problem minimizing a new ratio of two norms approximating the rank function then using an ADMM-type solver with applications, Transformed Schatten-1 penalty based full-rank latent label learning for incomplete multi-label classification, Block-sparse recovery and rank minimization using a weighted \(l_p-l_q\) model, Fitting feature-dependent Markov chains, Color image inpainting based on low-rank quaternion matrix factorization, Matrix completion with column outliers and sparse noise, Robust Recovery of Low-Rank Matrices and Low-Tubal-Rank Tensors from Noisy Sketches, Generalized two-dimensional linear discriminant analysis with regularization, Inertial proximal ADMM for separable multi-block convex optimizations and compressive affine phase retrieval, Robust Recommendation via Social Network Enhanced Matrix Completion, A framework of regularized low-rank matrix models for regression and classification, Tensor rank reduction via coordinate flows, An efficient semi-proximal ADMM algorithm for low-rank and sparse regularized matrix minimization problems with real-world applications, Smooth over-parameterized solvers for non-smooth structured optimization, Revisiting Spectral Bundle Methods: Primal-Dual (Sub)linear Convergence Rates, Certifying the Absence of Spurious Local Minima at Infinity, Universal Features for High-Dimensional Learning and Inference, Calibrated multi-task subspace learning via binary group structure 
constraint, Adaptive tensor networks decomposition for high-order tensor recovery and compression, On Integrality in Semidefinite Programming for Discrete Optimization, Covariate-assisted matrix completion with multiple structural breaks, A parallel low rank matrix optimization method for recovering Internet traffic network data via link flow measurement, Inexact penalty decomposition methods for optimization problems with geometric constraints, A portmanteau local feature discrimination approach to the classification with high-dimensional matrix-variate data, Algorithmic Regularization in Model-Free Overparametrized Asymmetric Matrix Factorization, Accelerated matrix completion algorithm using continuation strategy and randomized SVD, Optimal Algorithms for Stochastic Complementary Composite Minimization, Sparse Reduced Rank Huber Regression in High Dimensions, The low-rank approximation of fourth-order partial-symmetric and conjugate partial-symmetric tensor, A singular value shrinkage thresholding algorithm for folded concave penalized low-rank matrix optimization problems, High-dimensional estimation of quadratic variation based on penalized realized variance, Further results on tensor nuclear norms, Smoothing fast proximal gradient algorithm for the relaxation of matrix rank regularization problem, Measuring Complexity of Learning Schemes Using Hessian-Schatten Total Variation, Modewise operators, the tensor restricted isometry property, and low-rank tensor recovery, Sparsity-Inducing Nonconvex Nonseparable Regularization for Convex Image Processing, A Corrected Tensor Nuclear Norm Minimization Method for Noisy Low-Rank Tensor Completion, PCA Sparsified, Low-rank matrix estimation via nonconvex optimization methods in multi-response errors-in-variables regression, A dual basis approach to multidimensional scaling, A survey on compressed sensing approach to systems and control, Nonnegative Low Rank Matrix Completion by Riemannian Optimalization Methods, Inexact 
generalized ADMM with relative error criteria for linearly constrained convex optimization problems, Expectile trace regression via low-rank and group sparsity regularization, A Linearly Convergent Algorithm for Solving a Class of Nonconvex/Affine Feasibility Problems, Entropic Regularization of the ℓ 0 Function, Low-Rank and Sparse Multi-task Learning, New Analysis on Sparse Solutions to Random Standard Quadratic Optimization Problems and Extensions, Proximal Markov chain Monte Carlo algorithms, Stable recovery of low-rank matrix via nonconvex Schatten \(p\)-minimization, Book Review: A mathematical introduction to compressive sensing, An Adaptive Correction Approach for Tensor Completion, Self-calibration and biconvex compressive sensing, Recovering Structured Signals in Noise: Least-Squares Meets Compressed Sensing, Tensor Completion in Hierarchical Tensor Representations, Low-Rank Inducing Norms with Optimality Interpretations, Rank-1 Tensor Properties with Applications to a Class of Tensor Optimization Problems, Optimization Methods for Synthetic Aperture Radar Imaging, Illumination Strategies for Intensity-Only Imaging, A General Theory of Singular Values with Applications to Signal Denoising, A Nonmonotone Alternating Updating Method for a Class of Matrix Factorization Problems, Exact matrix completion via convex optimization, Least-squares solutions and least-rank solutions of the matrix equation AXA *  = B and their relations, On the Schatten \(p\)-quasi-norm minimization for low-rank matrix recovery, Robust recovery of low-rank matrices with non-orthogonal sparse decomposition from incomplete measurements, An approximation method of CP rank for third-order tensor completion, Penalty decomposition methods for rank minimization, The finite steps of convergence of the fast thresholding algorithms with \(f\)-feedbacks in compressed sensing, Fast and Reliable Parameter Estimation from Nonlinear Observations, Quadratic Growth Conditions for Convex Matrix 
Optimization Problems Associated with Spectral Functions, Proximal linearization methods for Schatten \(p\)-quasi-norm minimization, A nonconvex approach to low-rank matrix completion using convex optimization, A successive difference-of-convex approximation method for a class of nonconvex nonsmooth optimization problems, Terracini convexity, Sparse estimation via lower-order penalty optimization methods in high-dimensional linear regression, Compressed sensing of low-rank plus sparse matrices, Multifrequency Interferometric Imaging with Intensity-Only Measurements, Low rank matrix minimization with a truncated difference of nuclear norm and Frobenius norm regularization, CGIHT: conjugate gradient iterative hard thresholding for compressed sensing and matrix completion, High-dimensional estimation with geometric constraints: Table 1., Near-optimal estimation of simultaneously sparse and low-rank matrices from nested linear measurements, An Extended Frank--Wolfe Method with “In-Face” Directions, and Its Application to Low-Rank Matrix Completion, Stable low-rank matrix recovery via null space properties, Nuclear norm system identification with missing inputs and outputs, Compressive Sensing, T-product factorization based method for matrix and tensor completion problems, Uniqueness in nuclear norm minimization: flatness of the nuclear norm sphere and simultaneous polarization, Convex optimization for the planted \(k\)-disjoint-clique problem, Finding Planted Subgraphs with Few Eigenvalues using the Schur--Horn Relaxation, On a unified view of nullspace-type conditions for recoveries associated with general sparsity structures, Closed-loop identification of unstable systems using noncausal FIR models, Sparse PCA: optimal rates and adaptive estimation, Efficient algorithms for robust and stable principal component pursuit problems, Properties and methods for finding the best rank-one approximation to higher-order tensors, Lowest-rank solutions of continuous and discrete 
Lyapunov equations over symmetric cone, An introduction to a class of matrix cone programming, Median-Truncated Gradient Descent: A Robust and Scalable Nonconvex Approach for Signal Estimation, Nuclear norm based two-dimensional sparse principal component analysis, Consistency Analysis for Massively Inconsistent Datasets in Bound-to-Bound Data Collaboration, L2RM: Low-Rank Linear Regression Models for High-Dimensional Matrix Responses, On the equivalence between low-rank matrix completion and tensor rank, Simultaneous-shot inversion for PDE-constrained optimization problems with missing data, Adapting Regularized Low-Rank Models for Parallel Architectures, Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization, ELASTIC-NET REGULARIZATION FOR LOW-RANK MATRIX RECOVERY, An alternating direction method for linear‐constrained matrix nuclear norm minimization, Implementing the Alternating Direction Method of Multipliers for Big Datasets: A Case Study of Least Absolute Shrinkage and Selection Operator, On minimal rank solutions to symmetric Lyapunov equations in Euclidean Jordan algebra, A proximal multiplier method for separable convex minimization, Operator-Lipschitz estimates for the singular value functional calculus, Optimal Kullback–Leibler approximation of Markov chains via nuclear norm regularisation, A Subgradient Method Based on Gradient Sampling for Solving Convex Optimization Problems, Collaborative Total Variation: A General Framework for Vectorial TV Models, Relations between least-squares and least-rank solutions of the matrix equation \(AXB=C\), Low Complexity Regularization of Linear Inverse Problems, Long horizon input parameterisations to enlarge the region of attraction of MPC, Recovery of Low Rank Symmetric Matrices via Schatten p Norm Minimization, Euclidean Distance Matrices and Applications, High dimensional covariance matrix estimation using multi-factor models from incomplete 
information, Convergence analysis of projected gradient descent for Schatten-\(p\) nonconvex matrix recovery, Decentralized and privacy-preserving low-rank matrix completion, Matrix completion and tensor rank, Linear Models Based on Noisy Data and the Frisch Scheme, Fractional minimal rank, Low-Rank Spectral Optimization via Gauge Duality, ORBITOPES, Guarantees of Riemannian Optimization for Low Rank Matrix Recovery, Dynamic Assortment Personalization in High Dimensions, Linear Convergence of Descent Methods for the Unconstrained Minimization of Restricted Strongly Convex Functions, NCSOStools: a computer algebra system for symbolic and numerical computation with noncommutative polynomials, Low-Rank Approximation and Completion of Positive Tensors, State-Space Modeling of Two-Dimensional Vector-Exponential Trajectories, Isolated calmness of solution mappings and exact recovery conditions for nuclear norm optimization problems, Efficient Matrix Sensing Using Rank-1 Gaussian Measurements, Low Rank Estimation of Similarities on Graphs, A Convex Relaxation to Compute the Nearest Structured Rank Deficient Matrix, EXACT LOW-RANK MATRIX RECOVERY VIA NONCONVEX SCHATTEN p-MINIMIZATION, A Subspace Method for Large-Scale Eigenvalue Optimization, Truncated $l_{1-2}$ Models for Sparse Recovery and Rank Minimization, An Overview of Computational Sparse Models and Their Applications in Artificial Intelligence, Forward–backward-based descent methods for composite variational inequalities, Low tubal rank tensor recovery using the Bürer-Monteiro factorisation approach. 
Application to optical coherence tomography, Low-rank matrix recovery with Ky Fan 2-\(k\)-norm, Poisson reduced-rank models with sparse loadings, Learning with tree tensor networks: complexity estimates and model selection, Regularized high dimension low tubal-rank tensor regression, A fast proximal iteratively reweighted nuclear norm algorithm for nonconvex low-rank matrix minimization problems, Noisy tensor completion via the sum-of-squares hierarchy, Fitting Laplacian regularized stratified Gaussian models, An inexact proximal DC algorithm with sieving strategy for rank constrained least squares semidefinite programming, Bias versus non-convexity in compressed sensing, Non-convex low-rank representation combined with rank-one matrix sum for subspace clustering, Augmented Lagrangian methods for convex matrix optimization problems, Kurdyka-Łojasiewicz exponent via inf-projection, Inertial alternating direction method of multipliers for non-convex non-smooth optimization, Enhanced alternating energy minimization methods for stochastic Galerkin matrix equations, Trading off \(1\)-norm and sparsity against rank for linear models using mathematical optimization: \(1\)-norm minimizing partially reflexive ah-symmetric generalized inverses, Extended randomized Kaczmarz method for sparse least squares and impulsive noise problems, Low rank matrix recovery with impulsive noise, Cauchy noise removal by weighted nuclear norm minimization, The convex geometry of linear inverse problems, TILT: transform invariant low-rank textures, Monotonically convergent algorithms for symmetric tensor approximation, The minimal measurement number for low-rank matrix recovery, Proximal iteratively reweighted algorithm for low-rank matrix recovery, Compressed sensing and matrix completion with constant proportion of corruptions, \(\ell _p\) regularized low-rank approximation via iterative reweighted singular value minimization, Discussion: Latent variable graphical model selection via convex 
optimization, Rejoinder: Latent variable graphical model selection via convex optimization, Sparse blind deconvolution and demixing through \(\ell_{1,2}\)-minimization, Affine matrix rank minimization problem via non-convex fraction function penalty, Hybrid reconstruction of quantum density matrix: when low-rank meets sparsity, Accelerated linearized Bregman method, Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm, Second order accurate distributed eigenvector computation for extremely large matrices, Painless breakups -- efficient demixing of low rank matrices, A new nonconvex approach to low-rank matrix completion with application to image inpainting, Restricted \(p\)-isometry properties of partially sparse signal recovery, Linear convergence of the randomized sparse Kaczmarz method, A model for influence of nuclear-electricity industry on area economy, Enhancing matrix completion using a modified second-order total variation, Proximal alternating penalty algorithms for nonsmooth constrained convex optimization, Subspace-based spectrum estimation in innovation models by mixed norm minimization, Level-set methods for convex optimization, Learning semidefinite regularizers, Phase retrieval from Fourier measurements with masks, Low-rank matrix recovery using Gabidulin codes in characteristic zero, A mixture of nuclear norm and matrix factorization for tensor completion, Stable recovery of low-dimensional cones in Hilbert spaces: one RIP to rule them all, Fixed-point algorithms for frequency estimation and structured low rank approximation, Fast and provable algorithms for spectrally sparse signal reconstruction via low-rank Hankel matrix completion, DC formulations and algorithms for sparse optimization problems, Convex low rank approximation, Regularization and the small-ball method. 
I: Sparse recovery, Tensor completion using total variation and low-rank matrix factorization, Optimizing shrinkage curves and application in image denoising, An augmented Lagrangian method for the optimal \(H_\infty\) model order reduction problem, Low-rank matrix completion using nuclear norm minimization and facial reduction, Equivalent Lipschitz surrogates for zero-norm and rank optimization problems, Block tensor train decomposition for missing data estimation, Convexifying the set of matrices of bounded rank: applications to the quasiconvexification and convexification of the rank function, On the subdifferential of symmetric convex functions of the spectrum for symmetric and orthogonally decomposable tensors, 2D compressed learning: support matrix machine with bilinear random projections, Learning latent variable Gaussian graphical model for biomolecular network with low sample complexity, Online optimization for max-norm regularization, On recovery guarantees for one-bit compressed sensing on manifolds, Estimation of the parameters of a weighted nuclear norm model and its application in image denoising, On finding a generalized lowest rank solution to a linear semi-definite feasibility problem, Approximating the minimum rank of a graph via alternating projection, Riemannian gradient descent methods for graph-regularized matrix completion, Quartic first-order methods for low-rank minimization, Low-rank matrix completion in a general non-orthogonal basis, Error bounds for rank constrained optimization problems and applications, Phase retrieval with PhaseLift algorithm, An efficient method for convex constrained rank minimization problems based on DC programming, Low-rank matrix recovery via regularized nuclear norm minimization, Speeding up finite-time consensus via minimal polynomial of a weighted graph -- a numerical approach, Robust visual tracking via consistent low-rank sparse learning, Oracle posterior contraction rates under hierarchical priors, Double 
fused Lasso regularized regression with both matrix and vector valued predictors, Regularization parameter selection for the low rank matrix recovery, Tensor theta norms and low rank recovery, Error bound of critical points and KL property of exponent 1/2 for squared F-norm regularized factorization, Tensor-free proximal methods for lifted bilinear/quadratic inverse problems with applications to phase retrieval, A new method based on the manifold-alternative approximating for low-rank matrix completion, Low-rank dynamic mode decomposition: an exact and tractable solution, Sampling from non-smooth distributions through Langevin diffusion, Low-rank matrix recovery with composite optimization: good conditioning and rapid convergence, An adaptation for iterative structured matrix completion, Efficient proximal mapping computation for low-rank inducing norms, Low phase-rank approximation, An inexact symmetric ADMM algorithm with indefinite proximal term for sparse signal recovery and image restoration problems, On the robustness of minimum norm interpolators and regularized empirical risk minimizers, Riemannian conjugate gradient descent method for fixed multi rank third-order tensor completion, A semismooth Newton-based augmented Lagrangian algorithm for density matrix least squares problems, Tensor completion via a generalized transformed tensor t-product decomposition without t-SVD, An ensemble of high rank matrices arising from tournaments, Encoding inductive invariants as barrier certificates: synthesis via difference-of-convex programming, New challenges in covariance estimation: multiple structures and coarse quantization, Efficient low-rank regularization-based algorithms combining advanced techniques for solving tensor completion problems with application to color image recovering, New and explicit constructions of unbalanced Ramanujan bipartite graphs, A subgradient-based convex approximations method for DC programming and its applications, On linear 
convergence of projected gradient method for a class of affine rank minimization problems, Variational analysis of the Ky Fan \(k\)-norm, Unbiased risk estimates for matrix estimation in the elliptical case, Convergence and stability of iteratively reweighted least squares for low-rank matrix recovery, \(S_{1/2}\) regularization methods and fixed point algorithms for affine rank minimization problems, Convex optimization learning of faithful Euclidean distance representations in nonlinear dimensionality reduction, Restricted isometry property of principal component pursuit with reduced linear measurements, When only global optimization matters, Parallel stochastic gradient algorithms for large-scale matrix completion, High-dimensional change-point estimation: combining filtering with convex optimization, Alternating proximal gradient method for convex minimization, Low rank tensor recovery via iterative hard thresholding, Novel alternating update method for low rank approximation of structured matrices, Channel estimation for finite scatterers massive multi-user MIMO system, Output-only identification of input-output models, Quantum tomography by regularized linear regressions, Generalizing CoSaMP to signals from a union of low dimensional linear subspaces, Iterative methods based on soft thresholding of hierarchical tensors, Guarantees of Riemannian optimization for low rank matrix completion, Enhanced image approximation using shifted rank-1 reconstruction, Rank-constrained optimization and its applications, The degrees of freedom of partly smooth regularizers, The local convexity of solving systems of quadratic equations, Spectral operators of matrices, An alternating minimization method for matrix completion problems, Bayesian rank penalization, Parametrized quasi-soft thresholding operator for compressed sensing and matrix completion, Sums of Hermitian squares decomposition of non-commutative polynomials in non-symmetric variables using NCSOStools, Iterative 
hard thresholding for low-rank recovery from rank-one projections, Exact semidefinite formulations for a class of (random and non-random) nonconvex quadratic programs, A singular value \(p\)-shrinkage thresholding algorithm for low rank matrix recovery, A simple convergence analysis of Bregman proximal gradient algorithm, Implicit regularization in nonconvex statistical estimation: gradient descent converges linearly for phase retrieval, matrix completion, and blind deconvolution, Non-smooth non-convex Bregman minimization: unification and new algorithms, Fast self-adaptive regularization iterative algorithm for solving split feasibility problem, Miscellaneous reverse order laws for generalized inverses of matrix products with applications, Matrix completion with nonconvex regularization: spectral operators and scalable algorithms, Block-sparse recovery of semidefinite systems and generalized null space conditions, Estimating the backward error for the least-squares problem with multiple right-hand sides, Two relaxation methods for rank minimization problems, Multi-label learning with missing labels using mixed dependency graphs, An efficient method for clustered multi-metric learning, Online Schatten quasi-norm minimization for robust principal component analysis, Matrix completion for matrices with low-rank displacement, Gridless DOA estimation for minimum-redundancy linear array in nonuniform noise, Robust principal component analysis using facial reduction, A multi-stage convex relaxation approach to noisy structured low-rank matrix recovery, Stable rank-one matrix completion is solved by the level \(2\) Lasserre relaxation, The left greatest common divisor and the left least common multiple for all solutions of the matrix equation \(BX = a\) over a commutative domain of elementary divisors, A relaxed interior point method for low-rank semidefinite programming problems with applications to matrix completion, An algorithm for matrix recovery of high-loss-rate 
network traffic data, A new graph parameter related to bounded rank positive semidefinite matrix completions, Prox-regularity of rank constraint sets and implications for algorithms, Matrix recipes for hard thresholding methods, Learning with tensors: a framework based on convex optimization and spectral regularization, Sharp RIP bound for sparse signal and low-rank matrix recovery, Convergence of projected Landweber iteration for matrix rank minimization, A reweighted nuclear norm minimization algorithm for low rank matrix recovery, Robust linear optimization under matrix completion, Sparse trace norm regularization, Homotopy method for matrix rank minimization based on the matrix hard thresholding method, Minimum rank Hermitian solution to the matrix approximation problem in the spectral norm and its application, On convex envelopes and regularization of non-convex functionals without moving global minima, Recovering low-rank and sparse matrix based on the truncated nuclear norm, A penalty decomposition method for rank minimization problem with affine constraints, Recovery of simultaneous low rank and two-way sparse coefficient matrices, a nonconvex approach, Impossibility of dimension reduction in the nuclear norm, Robust Schatten-\(p\) norm based approach for tensor completion, A non-convex tensor rank approximation for tensor completion, On polyhedral and second-order cone decompositions of semidefinite optimization problems, Superresolution 2D DOA estimation for a rectangular array via reweighted decoupled atomic norm minimization, Optimally linearizing the alternating direction method of multipliers for convex programming, Matrix optimization over low-rank spectral sets: stationary points and local and global minimizers, Provable accelerated gradient method for nonconvex low rank optimization, Sharp oracle inequalities for low-complexity priors, RIP-based performance guarantee for low-tubal-rank tensor recovery, On privacy preserving data release of linear 
dynamic networks, Low-rank tensor completion via smooth matrix factorization, A class of alternating linearization algorithms for nonsmooth convex optimization, Enabling numerically exact local solver for waveform inversion -- a low-rank approach, Truncated sparse approximation property and truncated \(q\)-norm minimization, Alternating direction and Taylor expansion minimization algorithms for unconstrained nuclear norm optimization, Non-intrusive tensor reconstruction for high-dimensional random PDEs, Nonparametric estimation of low rank matrix valued function, An inexact dual logarithmic barrier method for solving sparse semidefinite programs, Optimal RIP bounds for sparse signals recovery via \(\ell_p\) minimization, Typical and generic ranks in matrix completion, Interference alignment based on rank constraint in MIMO cognitive radio networks, Optimization of the regularization in background and foreground modeling, Analysis of singular value thresholding algorithm for matrix completion, ROP: matrix recovery via rank-one projections, An efficient primal dual prox method for non-smooth optimization, A partial derandomization of phaselift using spherical designs, Set membership identification of switched linear systems with known number of subsystems, On the nuclear norm heuristic for a Hankel matrix completion problem, Coordinate descent algorithms, Splitting methods with variable metric for Kurdyka-Łojasiewicz functions and general convergence rates, Minimum rank (skew) Hermitian solutions to the matrix approximation problem in the spectral norm, Semi-supervised learning with nuclear norm regularization, Model-based spectral coherence analysis, An Exact and Robust Conformal Inference Method for Counterfactual and Synthetic Controls, Exact penalization for cardinality and rank-constrained optimization problems via partial regularization, The numerics of phase retrieval, On the Support of Compressed Modes, Sharp Restricted Isometry Bounds for the Inexistence of 
Spurious Local Minima in Nonconvex Matrix Recovery, Decentralized Dictionary Learning Over Time-Varying Digraphs, Maximum A Posteriori Inference of Random Dot Product Graphs via Conic Programming, WARPd: A Linearly Convergent First-Order Primal-Dual Algorithm for Inverse Problems with Approximate Sharpness Conditions, An Unbiased Approach to Low Rank Recovery, Solving Natural Conic Formulations with Hypatia.jl, Mixed-Projection Conic Optimization: A New Paradigm for Modeling Rank Constraints, Column $\ell_{2,0}$-Norm Regularized Factorization Model of Low-Rank Matrix Recovery and Its Computation, Random Sampling and Reconstruction of Sparse Time- and Band-Limited Signals, GNMR: A Provable One-Line Algorithm for Low Rank Matrix Recovery, High-dimensional dynamic systems identification with additional constraints, A universal rank approximation method for matrix completion, Nonexpansiveness of a linearized augmented Lagrangian operator for hierarchical convex optimization, Multistage Convex Relaxation Approach to Rank Regularized Minimization Problems Based on Equivalent Mathematical Program with a Generalized Complementarity Constraint, Recovery of low-rank matrices based on the rank null space properties, $N$-Dimensional Tensor Completion for Nuclear Magnetic Resonance Relaxometry, A Generalization of Wirtinger Flow for Exact Interferometric Inversion, A separable surrogate function method for sparse and low-rank matrices decomposition, A numerical investigation of direct and indirect closed-loop architectures for estimating nonminimum-phase zeros, Solving Partial Differential Equations on Manifolds From Incomplete Interpoint Distance, Colour of turbulence, Theoretical and Experimental Analyses of Tensor-Based Regression and Classification, An Equivalence between Critical Points for Rank Constraints Versus Low-Rank Factorizations, Noisy Matrix Completion: Understanding Statistical Guarantees for Convex Relaxation via Nonconvex Optimization, Quantum 
tomography via compressed sensing: error bounds, sample complexity and efficient estimators, Higher-order total variation approaches and generalisations, Multiplicative Noise Removal: Nonlocal Low-Rank Model and Its Proximal Alternating Reweighted Minimization Algorithm, Multiview Clustering of Images with Tensor Rank Minimization via Nonconvex Approach, Krylov Methods for Low-Rank Regularization, Tensor Completion via Gaussian Process--Based Initialization, A NEW MODEL FOR SPARSE AND LOW-RANK MATRIX DECOMPOSITION, Matrix completion based on Gaussian parameterized belief propagation, Persistent homology for low-complexity models, An Optimal-Storage Approach to Semidefinite Programming Using Approximate Complementarity, A linear programming approach for designing multilevel PWM waveforms, Strong duality in robust semi-definite linear programming under data uncertainty, The phase transition of matrix recovery from Gaussian measurements matches the minimax MSE of matrix denoising, Compressed modes for variational problems in mathematics and physics, Finding Low-rank Solutions of Sparse Linear Matrix Inequalities using Convex Optimization, Low-Rank Tensor Recovery using Sequentially Optimal Modal Projections in Iterative Hard Thresholding (SeMPIHT), Convex Optimization and Parsimony of $L_p$-balls Representation, Primal-Dual Interior-Point Methods for Domain-Driven Formulations, An iterative algorithm for third-order tensor multi-rank minimization, Phase Retrieval via Matrix Completion, A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers, On phase retrieval via matrix completion and the estimation of low rank PSD matrices, PRIMME_SVDS: A High-Performance Preconditioned SVD Solver for Accurate Large-Scale Computations, Spectral Operators of Matrices: Semismoothness and Characterizations of the Generalized Jacobian, Flip-flop spectrum-revealing QR factorization and its applications 
to singular value decomposition, Adaptive Low-Nonnegative-Rank Approximation for State Aggregation of Markov Chains, Finding Low-Rank Solutions via Nonconvex Matrix Factorization, Efficiently and Provably, Modern regularization methods for inverse problems, On the Landscape of Synchronization Networks: A Perspective from Nonconvex Optimization, Fiber Sampling Approach to Canonical Polyadic Decomposition and Application to Tensor Completion, A Convex Approach to Superresolution and Regularization of Lines in Images, Relaxation algorithms for matrix completion, with applications to seismic travel-time data interpolation, Matrix completion via minimizing an approximate rank, Semidefinite Programming and Nash Equilibria in Bimatrix Games, Subsampling Algorithms for Semidefinite Programming, Incremental CP Tensor Decomposition by Alternating Minimization Method, A Splitting Augmented Lagrangian Method for Low Multilinear-Rank Tensor Recovery, Tensor Methods for Nonlinear Matrix Completion, Semidefinite Programming For Chance Constrained Optimization Over Semialgebraic Sets, An Efficient Gauss--Newton Algorithm for Symmetric Low-Rank Product Matrix Approximations, Nonlocal low-rank regularized two-phase approach for mixed noise removal, A Global Approach for Solving Edge-Matching Puzzles, Quasi-linear Compressed Sensing, On the Convergence of Projected-Gradient Methods with Low-Rank Projections for Smooth Convex Minimization over Trace-Norm Balls and Related Problems, Orthogonal Rank-One Matrix Pursuit for Low Rank Matrix Completion, A Superlinearly 
Convergent Smoothing Newton Continuation Algorithm for Variational Inequalities over Definable Sets, Perturbation analysis of low-rank matrix stable recovery, Lower Memory Oblivious (Tensor) Subspace Embeddings with Fewer Random Bits: Modewise Methods for Least Squares, Bridging and Improving Theoretical and Computational Electrical Impedance Tomography via Data Completion, Robust Width: A Characterization of Uniformly Stable and Robust Compressed Sensing, Approximate matrix completion based on cavity method, Decoding from Pooled Data: Sharp Information-Theoretic Bounds, Multilinear Compressive Sensing and an Application to Convolutional Linear Networks, Stop Memorizing: A Data-Dependent Regularization Framework for Intrinsic Pattern Learning, Matrix Rigidity and the Ill-Posedness of Robust PCA and Matrix Completion, An analysis of noise folding for low-rank matrix recovery, ISLET: Fast and Optimal Low-Rank Tensor Regression via Importance Sketching, Low rank matrix recovery with adversarial sparse noise*, A novel robust principal component analysis method for image and video processing., A shrinkage principle for heavy-tailed data: high-dimensional robust low-rank matrix recovery, A general family of trimmed estimators for robust high-dimensional data analysis, A penalty method for rank minimization problems in symmetric matrices, Two-stage convex relaxation approach to least squares loss constrained low-rank plus sparsity optimization problems, On the construction of general cubature formula by flat extensions, Optimal large-scale quantum state tomography with Pauli measurements, The minimal rank of matrix expressions with respect to Hermitian matrix-revised, Sparse recovery via nonconvex regularized \(M\)-estimators over \(\ell_q\)-balls, Synthesizing invariant barrier certificates via difference-of-convex programming, Minimum \( n\)-rank approximation via iterative hard thresholding, A new gradient projection method for matrix completion, On the 
nuclear norm and the singular value decomposition of tensors, A proximal method for composite minimization, Geometric inference for general high-dimensional linear inverse problems, A rank-corrected procedure for matrix completion with fixed basis coefficients, Semidefinite programming and sums of Hermitian squares of noncommutative polynomials, Low-rank parameterization of planar domains for isogeometric analysis, Sharp MSE bounds for proximal denoising, On the need for structure modelling in sequence prediction, An improved robust ADMM algorithm for quantum state tomography, Tucker factorization with missing data with application to low-\(n\)-rank tensor completion, Checking strict positivity of Kraus maps is NP-hard, Improved recovery guarantees for phase retrieval from coded diffraction patterns, Low rank matrix recovery from rank one measurements, Trace regression model with simultaneously low rank and row(column) sparse parameter, Algorithmic aspects of sums of Hermitian squares of noncommutative polynomials, Low rank estimation of smooth kernels on graphs, A variational approach of the rank function, Stable analysis of compressive principal component pursuit, Exact low-rank matrix completion from sparsely corrupted entries via adaptive outlier pursuit, \(s\)-goodness for low-rank matrix recovery, Fast alternating linearization methods for minimizing the sum of two convex functions, Simple bounds for recovering low-complexity models, Some optimization problems on ranks and inertias of matrix-valued functions subject to linear matrix equation restrictions, Approximation accuracy, gradient methods, and error bound for structured convex optimization, A globally convergent algorithm for nonconvex optimization based on block coordinate update, Approximation of rank function and its application to the nearest low-rank correlation matrix, Sparse nonnegative matrix underapproximation and its application to hyperspectral image analysis, High-dimensional covariance 
matrix estimation with missing observations, Exact minimum rank approximation via Schatten \(p\)-norm minimization, Learning Markov random walks for robust subspace clustering and estimation, A proximal alternating linearization method for minimizing the sum of two convex functions, Sharp recovery bounds for convex demixing, with applications, Rank constrained matrix best approximation problem, Guaranteed recovery of planted cliques and dense subgraphs by convex relaxation, An approximation theory of matrix rank minimization and its application to quadratic equations, Formulas for calculating the extremum ranks and inertias of a four-term quadratic matrix-valued function and their applications, Null space conditions and thresholds for rank minimization, From compression to compressed sensing, A perturbation inequality for concave functions of singular values and its applications in low-rank matrix recovery, An implementable proximal point algorithmic framework for nuclear norm minimization, Learning functions of few arbitrary linear parameters in high dimensions, Uniqueness conditions for low-rank matrix recovery, Noisy matrix decomposition via convex relaxation: optimal rates in high dimensions, A kernel-based framework to tensorial data analysis, An efficient matrix bi-factorization alternative optimization method for low-rank matrix recovery and completion, Solving optimization problems on ranks and inertias of some constrained nonlinear matrix functions via an algebraic linearization method, Guaranteed clustering and biclustering via semidefinite programming, Sparse recovery on Euclidean Jordan algebras, Nonmonotone Barzilai-Borwein gradient algorithm for \(\ell_1\)-regularized nonsmooth minimization in compressive sensing, Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion, Convex optimization methods for dimension reduction and coefficient estimation in multivariate linear regression, A new approximation of the matrix rank 
function and its application to matrix rank minimization, Minimax risk of matrix denoising by singular value thresholding, Stable optimizationless recovery from phaseless linear measurements, Optimal rank-sparsity decomposition, Low-rank matrix recovery via rank one tight frame measurements, Equivalence and strong equivalence between the sparsest and least \(\ell _1\)-norm nonnegative solutions of linear systems and their applications, Analysis of convergence for the alternating direction method applied to joint sparse recovery, Conditional gradient algorithms for norm-regularized smooth convex optimization, Extreme point inequalities and geometry of the rank sparsity ball, A new algorithm for positive semidefinite matrix completion, A partial proximal point algorithm for nuclear norm regularized matrix least squares problems, Max-norm optimization for robust matrix recovery, Interpreting latent variables in factor models via convex optimization, Decomposable norm minimization with proximal-gradient homotopy algorithm, Dimensionality reduction with subgaussian matrices: a unified theory, Finding a low-rank basis in a matrix subspace, An alternating direction algorithm for matrix completion with nonnegative factors, Convergence of fixed-point continuation algorithms for matrix rank minimization, Fixed point and Bregman iterative methods for matrix rank minimization, Estimation of high-dimensional low-rank matrices, Estimation of (near) low-rank matrices with noise and high-dimensional scaling, Optimal selection of reduced rank estimators of high-dimensional matrices, Nuclear norm minimization for the planted clique and biclique problems, Explicit frames for deterministic phase retrieval via PhaseLift, Sparse functional identification of complex cells from spike times and the decoding of visual stimuli, Characterization of the equivalence of robustification and regularization in linear and matrix regression, Geometric median and robust estimation in Banach spaces, 
Latent variable graphical model selection via convex optimization, Convex optimization for the densest subgraph and densest submatrix problems, Robust recovery of complex exponential signals from random Gaussian projections via low rank Hankel matrix reconstruction, A simple prior-free method for non-rigid structure-from-motion factorization, Learning non-parametric basis independent models from point queries via low-rank methods, Fast global convergence of gradient methods for high-dimensional statistical recovery, Generalized ADMM with optimal indefinite proximal term for linearly constrained convex optimization, Parallel matrix factorization for low-rank tensor completion, An alternating direction method with continuation for nonconvex low rank minimization, On two continuum armed bandit problems in high dimensions
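The publication above proposes relaxing affine rank minimization to nuclear norm minimization, and several of the related items (e.g. "Fixed point and Bregman iterative methods for matrix rank minimization") solve the resulting convex problem with singular value thresholding. Below is a minimal, self-contained sketch of that idea for matrix completion via proximal gradient descent; the problem sizes, sampling rate, regularization weight `lam`, and step size are illustrative assumptions, not values taken from any of the listed works.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
n, r = 20, 2                                # matrix size and true rank (illustrative)
X_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

mask = rng.random((n, n)) < 0.6             # observe roughly 60% of the entries
b = mask * X_true                           # observed data, zeros elsewhere

# Proximal gradient on  (1/2)||P_Omega(X) - b||_F^2 + lam * ||X||_*
# step = 1 is safe since the sampling operator P_Omega has operator norm 1.
lam, step = 0.1, 1.0
X = np.zeros((n, n))
for _ in range(500):
    grad = mask * X - b                     # gradient of the data-fit term
    X = svt(X - step * grad, step * lam)    # proximal (shrinkage) step

err = np.linalg.norm(X - X_true) / np.linalg.norm(X_true)
```

The soft-thresholding of singular values is what makes the iterates low-rank; with enough observed entries relative to the degrees of freedom of a rank-`r` matrix, the relative error `err` becomes small.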


Uses Software