Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization


Publication:3586174


DOI: 10.1137/070697835
zbMath: 1198.90321
arXiv: 0706.4138
Wikidata: Q94409659
Scholia: Q94409659
MaRDI QID: Q3586174

Authors: Benjamin Recht, Maryam Fazel, Pablo A. Parrilo

Publication date: 6 September 2010

Published in: SIAM Review

Full work available at URL: https://arxiv.org/abs/0706.4138


90C25: Convex programming

90C59: Approximation methods and heuristics in mathematical programming

15B52: Random matrices (algebraic aspects)
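
For readers skimming this record, here is a minimal sketch of the nuclear-norm heuristic that the title refers to: the nonconvex rank objective is replaced by the nuclear norm, which turns affine rank minimization into a convex, semidefinite-representable program. The example below uses CVXPY, and the problem sizes, variable names, and measurement model are illustrative assumptions only; neither the library nor the specific dimensions come from the paper or this record.

import numpy as np
import cvxpy as cp

# Illustrative data (hypothetical sizes): a rank-2 ground-truth matrix X0 and
# m random Gaussian linear measurements b_i = <A_i, X0>.
rng = np.random.default_rng(0)
n1, n2, r, m = 10, 10, 2, 90
X0 = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))
A_ops = rng.standard_normal((m, n1, n2))
b = np.array([np.sum(Ai * X0) for Ai in A_ops])

# Nuclear-norm relaxation: minimize ||X||_* subject to <A_i, X> = b_i for all i.
X = cp.Variable((n1, n2))
constraints = [cp.sum(cp.multiply(Ai, X)) == bi for Ai, bi in zip(A_ops, b)]
problem = cp.Problem(cp.Minimize(cp.normNuc(X)), constraints)
problem.solve()

# With enough generic measurements the minimizer typically coincides with X0.
print("relative recovery error:",
      np.linalg.norm(X.value - X0) / np.linalg.norm(X0))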


Related Items

Geometric median and robust estimation in Banach spaces, Latent variable graphical model selection via convex optimization, Convex optimization for the densest subgraph and densest submatrix problems, Parallel matrix factorization for low-rank tensor completion, An alternating direction method with continuation for nonconvex low rank minimization, On two continuum armed bandit problems in high dimensions, A novel robust principal component analysis method for image and video processing., Two-stage convex relaxation approach to least squares loss constrained low-rank plus sparsity optimization problems, On the construction of general cubature formula by flat extensions, Optimal large-scale quantum state tomography with Pauli measurements, The minimal rank of matrix expressions with respect to Hermitian matrix-revised, Minimum \( n\)-rank approximation via iterative hard thresholding, A new gradient projection method for matrix completion, On the nuclear norm and the singular value decomposition of tensors, A proximal method for composite minimization, Geometric inference for general high-dimensional linear inverse problems, A rank-corrected procedure for matrix completion with fixed basis coefficients, Sharp MSE bounds for proximal denoising, On the need for structure modelling in sequence prediction, An improved robust ADMM algorithm for quantum state tomography, Tucker factorization with missing data with application to low-\(n\)-rank tensor completion, Checking strict positivity of Kraus maps is NP-hard, Improved recovery guarantees for phase retrieval from coded diffraction patterns, Low rank matrix recovery from rank one measurements, Algorithmic aspects of sums of Hermitian squares of noncommutative polynomials, Low rank estimation of smooth kernels on graphs, A variational approach of the rank function, Exact low-rank matrix completion from sparsely corrupted entries via adaptive outlier pursuit, \(s\)-goodness for low-rank matrix recovery, Fast alternating linearization methods for minimizing the sum of two convex functions, Simple bounds for recovering low-complexity models, Some optimization problems on ranks and inertias of matrix-valued functions subject to linear matrix equation restrictions, Approximation of rank function and its application to the nearest low-rank correlation matrix, Sparse nonnegative matrix underapproximation and its application to hyperspectral image analysis, High-dimensional covariance matrix estimation with missing observations, Exact minimum rank approximation via Schatten \(p\)-norm minimization, Sharp recovery bounds for convex demixing, with applications, An approximation theory of matrix rank minimization and its application to quadratic equations, Formulas for calculating the extremum ranks and inertias of a four-term quadratic matrix-valued function and their applications, An implementable proximal point algorithmic framework for nuclear norm minimization, Learning functions of few arbitrary linear parameters in high dimensions, Uniqueness conditions for low-rank matrix recovery, Noisy matrix decomposition via convex relaxation: optimal rates in high dimensions, A kernel-based framework to tensorial data analysis, An efficient matrix bi-factorization alternative optimization method for low-rank matrix recovery and completion, Guaranteed clustering and biclustering via semidefinite programming, Sparse recovery on Euclidean Jordan algebras, Nonmonotone Barzilai-Borwein gradient algorithm for \(\ell_1\)-regularized nonsmooth minimization in 
compressive sensing, A new approximation of the matrix rank function and its application to matrix rank minimization, Minimax risk of matrix denoising by singular value thresholding, Stable optimizationless recovery from phaseless linear measurements, Optimal rank-sparsity decomposition, Equivalence and strong equivalence between the sparsest and least \(\ell _1\)-norm nonnegative solutions of linear systems and their applications, Conditional gradient algorithms for norm-regularized smooth convex optimization, Extreme point inequalities and geometry of the rank sparsity ball, A partial proximal point algorithm for nuclear norm regularized matrix least squares problems, Decomposable norm minimization with proximal-gradient homotopy algorithm, Dimensionality reduction with subgaussian matrices: a unified theory, Finding a low-rank basis in a matrix subspace, Convergence of fixed-point continuation algorithms for matrix rank minimization, Fixed point and Bregman iterative methods for matrix rank minimization, Estimation of high-dimensional low-rank matrices, Estimation of (near) low-rank matrices with noise and high-dimensional scaling, Optimal selection of reduced rank estimators of high-dimensional matrices, Approximation accuracy, gradient methods, and error bound for structured convex optimization, Null space conditions and thresholds for rank minimization, Solving optimization problems on ranks and inertias of some constrained nonlinear matrix functions via an algebraic linearization method, Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion, Convex optimization methods for dimension reduction and coefficient estimation in multivariate linear regression, Low-rank matrix recovery via rank one tight frame measurements, Analysis of convergence for the alternating direction method applied to joint sparse recovery, A new algorithm for positive semidefinite matrix completion, Max-norm optimization for robust matrix recovery, Interpreting latent variables in factor models via convex optimization, An alternating direction algorithm for matrix completion with nonnegative factors, Nuclear norm minimization for the planted clique and biclique problems, Explicit frames for deterministic phase retrieval via PhaseLift, Sparse functional identification of complex cells from spike times and the decoding of visual stimuli, Characterization of the equivalence of robustification and regularization in linear and matrix regression, Robust recovery of complex exponential signals from random Gaussian projections via low rank Hankel matrix reconstruction, A simple prior-free method for non-rigid structure-from-motion factorization, Learning non-parametric basis independent models from point queries via low-rank methods, Fast global convergence of gradient methods for high-dimensional statistical recovery, Generalized ADMM with optimal indefinite proximal term for linearly constrained convex optimization, A shrinkage principle for heavy-tailed data: high-dimensional robust low-rank matrix recovery, Sparse recovery via nonconvex regularized \(M\)-estimators over \(\ell_q\)-balls, Synthesizing invariant barrier certificates via difference-of-convex programming, Semidefinite programming and sums of Hermitian squares of noncommutative polynomials, Learning Markov random walks for robust subspace clustering and estimation, A proximal alternating linearization method for minimizing the sum of two convex functions, Rank constrained matrix best approximation problem, Guaranteed recovery of 
planted cliques and dense subgraphs by convex relaxation, From compression to compressed sensing, A perturbation inequality for concave functions of singular values and its applications in low-rank matrix recovery, A general family of trimmed estimators for robust high-dimensional data analysis, A penalty method for rank minimization problems in symmetric matrices, Low-rank parameterization of planar domains for isogeometric analysis, Trace regression model with simultaneously low rank and row(column) sparse parameter, Stable analysis of compressive principal component pursuit, A globally convergent algorithm for nonconvex optimization based on block coordinate update, Unnamed Item, Unnamed Item, Unnamed Item, Semidefinite Programming and Nash Equilibria in Bimatrix Games, Incremental CP Tensor Decomposition by Alternating Minimization Method, Tensor Methods for Nonlinear Matrix Completion, Unnamed Item, Unnamed Item, Nonlocal low-rank regularized two-phase approach for mixed noise removal, Perturbation analysis of low-rank matrix stable recovery, Unnamed Item, Robust Width: A Characterization of Uniformly Stable and Robust Compressed Sensing, Decoding from Pooled Data: Sharp Information-Theoretic Bounds, Multilinear Compressive Sensing and an Application to Convolutional Linear Networks, Stop Memorizing: A Data-Dependent Regularization Framework for Intrinsic Pattern Learning, Matrix Rigidity and the Ill-Posedness of Robust PCA and Matrix Completion, ISLET: Fast and Optimal Low-Rank Tensor Regression via Importance Sketching, Low rank matrix recovery with adversarial sparse noise*, Maximum A Posteriori Inference of Random Dot Product Graphs via Conic Programming, WARPd: A Linearly Convergent First-Order Primal-Dual Algorithm for Inverse Problems with Approximate Sharpness Conditions, An Unbiased Approach to Low Rank Recovery, Solving Natural Conic Formulations with Hypatia.jl, Mixed-Projection Conic Optimization: A New Paradigm for Modeling Rank Constraints, Unnamed Item, Column $\ell_{2,0}$-Norm Regularized Factorization Model of Low-Rank Matrix Recovery and Its Computation, Random Sampling and Reconstruction of Sparse Time- and Band-Limited Signals, GNMR: A Provable One-Line Algorithm for Low Rank Matrix Recovery, High-dimensional dynamic systems identification with additional constraints, A universal rank approximation method for matrix completion, $N$-Dimensional Tensor Completion for Nuclear Magnetic Resonance Relaxometry, A Generalization of Wirtinger Flow for Exact Interferometric Inversion, A separable surrogate function method for sparse and low-rank matrices decomposition, A numerical investigation of direct and indirect closed-loop architectures for estimating nonminimum-phase zeros, An Equivalence between Critical Points for Rank Constraints Versus Low-Rank Factorizations, Noisy Matrix Completion: Understanding Statistical Guarantees for Convex Relaxation via Nonconvex Optimization, Quantum tomography via compressed sensing: error bounds, sample complexity and efficient estimators, Higher-order total variation approaches and generalisations, Multiplicative Noise Removal: Nonlocal Low-Rank Model and Its Proximal Alternating Reweighted Minimization Algorithm, Multiview Clustering of Images with Tensor Rank Minimization via Nonconvex Approach, Krylov Methods for Low-Rank Regularization, Tensor Completion via Gaussian Process--Based Initialization, A NEW MODEL FOR SPARSE AND LOW-RANK MATRIX DECOMPOSITION, Matrix completion based on Gaussian parameterized belief propagation, 
Persistent homology for low-complexity models, An Optimal-Storage Approach to Semidefinite Programming Using Approximate Complementarity, A linear programming approach for designing multilevel PWM waveforms, Strong duality in robust semi-definite linear programming under data uncertainty, The phase transition of matrix recovery from Gaussian measurements matches the minimax MSE of matrix denoising, Compressed modes for variational problems in mathematics and physics, Primal-Dual Interior-Point Methods for Domain-Driven Formulations, On phase retrieval via matrix completion and the estimation of low rank PSD matrices, PRIMME_SVDS: A High-Performance Preconditioned SVD Solver for Accurate Large-Scale Computations, Spectral Operators of Matrices: Semismoothness and Characterizations of the Generalized Jacobian, Flip-flop spectrum-revealing QR factorization and its applications to singular value decomposition, Adaptive Low-Nonnegative-Rank Approximation for State Aggregation of Markov Chains, Finding Low-Rank Solutions via Nonconvex Matrix Factorization, Efficiently and Provably, Modern regularization methods for inverse problems, On the Landscape of Synchronization Networks: A Perspective from Nonconvex Optimization, Fiber Sampling Approach to Canonical Polyadic Decomposition and Application to Tensor Completion, A Convex Approach to Superresolution and Regularization of Lines in Images, Relaxation algorithms for matrix completion, with applications to seismic travel-time data interpolation, Matrix completion via minimizing an approximate rank, Subsampling Algorithms for Semidefinite Programming, A Splitting Augmented Lagrangian Method for Low Multilinear-Rank Tensor Recovery, A Global Approach for Solving Edge-Matching Puzzles, Quasi-linear Compressed Sensing, Orthogonal Rank-One Matrix Pursuit for Low Rank Matrix Completion, A Superlinearly Convergent Smoothing Newton Continuation Algorithm for Variational Inequalities over Definable Sets, On the Support of Compressed Modes, Sharp Restricted Isometry Bounds for the Inexistence of Spurious Local Minima in Nonconvex Matrix Recovery, Decentralized Dictionary Learning Over Time-Varying Digraphs, Nonexpansiveness of a linearized augmented Lagrangian operator for hierarchical convex optimization, Multistage Convex Relaxation Approach to Rank Regularized Minimization Problems Based on Equivalent Mathematical Program with a Generalized Complementarity Constraint, Recovery of low-rank matrices based on the rank null space properties, Solving Partial Differential Equations on Manifolds From Incomplete Interpoint Distance, Colour of turbulence, Theoretical and Experimental Analyses of Tensor-Based Regression and Classification, Semidefinite Programming For Chance Constrained Optimization Over Semialgebraic Sets, An Efficient Gauss--Newton Algorithm for Symmetric Low-Rank Product Matrix Approximations, Finding Low-rank Solutions of Sparse Linear Matrix Inequalities using Convex Optimization, Low-Rank Tensor Recovery using Sequentially Optimal Modal Projections in Iterative Hard Thresholding (SeMPIHT), Convex Optimization and Parsimony of $L_p$-balls Representation, An iterative algorithm for third-order tensor multi-rank minimization, Phase Retrieval via Matrix Completion, A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers, On the Convergence of Projected-Gradient Methods with Low-Rank Projections for Smooth Convex Minimization over Trace-Norm Balls and Related Problems, Discussion: Latent variable 
graphical model selection via convex optimization, Discussion: Latent variable graphical model selection via convex optimization, Discussion: Latent variable graphical model selection via convex optimization, Discussion: Latent variable graphical model selection via convex optimization, Discussion: Latent variable graphical model selection via convex optimization, Lower Memory Oblivious (Tensor) Subspace Embeddings with Fewer Random Bits: Modewise Methods for Least Squares, Bridging and Improving Theoretical and Computational Electrical Impedance Tomography via Data Completion, Approximate matrix completion based on cavity method, An analysis of noise folding for low-rank matrix recovery, Model-based spectral coherence analysis, An Exact and Robust Conformal Inference Method for Counterfactual and Synthetic Controls, Exact penalization for cardinality and rank-constrained optimization problems via partial regularization, The numerics of phase retrieval, Output-only identification of input-output models, Quantum tomography by regularized linear regressions, Generalizing CoSaMP to signals from a union of low dimensional linear subspaces, Guarantees of Riemannian optimization for low rank matrix completion, Enhanced image approximation using shifted rank-1 reconstruction, An alternating minimization method for matrix completion problems, Bayesian rank penalization, Parametrized quasi-soft thresholding operator for compressed sensing and matrix completion, Exact semidefinite formulations for a class of (random and non-random) nonconvex quadratic programs, Implicit regularization in nonconvex statistical estimation: gradient descent converges linearly for phase retrieval, matrix completion, and blind deconvolution, Fast self-adaptive regularization iterative algorithm for solving split feasibility problem, Miscellaneous reverse order laws for generalized inverses of matrix products with applications, Matrix completion with nonconvex regularization: spectral operators and scalable algorithms, Block-sparse recovery of semidefinite systems and generalized null space conditions, Estimating the backward error for the least-squares problem with multiple right-hand sides, Two relaxation methods for rank minimization problems, Multi-label learning with missing labels using mixed dependency graphs, An efficient method for clustered multi-metric learning, Online Schatten quasi-norm minimization for robust principal component analysis, Matrix completion for matrices with low-rank displacement, Gridless DOA estimation for minimum-redundancy linear array in nonuniform noise, Robust principal component analysis using facial reduction, A multi-stage convex relaxation approach to noisy structured low-rank matrix recovery, Stable rank-one matrix completion is solved by the level \(2\) Lasserre relaxation, The left greatest common divisor and the left least common multiple for all solutions of the matrix equation \(BX = a\) over a commutative domain of elementary divisors, A relaxed interior point method for low-rank semidefinite programming problems with applications to matrix completion, An algorithm for matrix recovery of high-loss-rate network traffic data, A new graph parameter related to bounded rank positive semidefinite matrix completions, Prox-regularity of rank constraint sets and implications for algorithms, Matrix recipes for hard thresholding methods, Learning with tensors: a framework based on convex optimization and spectral regularization, Sharp RIP bound for sparse signal and low-rank matrix 
recovery, Convergence of projected Landweber iteration for matrix rank minimization, A reweighted nuclear norm minimization algorithm for low rank matrix recovery, Robust linear optimization under matrix completion, Sparse trace norm regularization, Homotopy method for matrix rank minimization based on the matrix hard thresholding method, Minimum rank Hermitian solution to the matrix approximation problem in the spectral norm and its application, On convex envelopes and regularization of non-convex functionals without moving global minima, Recovering low-rank and sparse matrix based on the truncated nuclear norm, A penalty decomposition method for rank minimization problem with affine constraints, Recovery of simultaneous low rank and two-way sparse coefficient matrices, a nonconvex approach, Impossibility of dimension reduction in the nuclear norm, Robust Schatten-\(p\) norm based approach for tensor completion, A non-convex tensor rank approximation for tensor completion, On polyhedral and second-order cone decompositions of semidefinite optimization problems, Superresolution 2D DOA estimation for a rectangular array via reweighted decoupled atomic norm minimization, Optimally linearizing the alternating direction method of multipliers for convex programming, Matrix optimization over low-rank spectral sets: stationary points and local and global minimizers, Provable accelerated gradient method for nonconvex low rank optimization, Sharp oracle inequalities for low-complexity priors, RIP-based performance guarantee for low-tubal-rank tensor recovery, On privacy preserving data release of linear dynamic networks, Low-rank tensor completion via smooth matrix factorization, A class of alternating linearization algorithms for nonsmooth convex optimization, Enabling numerically exact local solver for waveform inversion -- a low-rank approach, Truncated sparse approximation property and truncated \(q\)-norm minimization, Alternating direction and Taylor expansion minimization algorithms for unconstrained nuclear norm optimization, Non-intrusive tensor reconstruction for high-dimensional random PDEs, Nonparametric estimation of low rank matrix valued function, An inexact dual logarithmic barrier method for solving sparse semidefinite programs, Optimal RIP bounds for sparse signals recovery via \(\ell_p\) minimization, Typical and generic ranks in matrix completion, Interference alignment based on rank constraint in MIMO cognitive radio networks, Optimization of the regularization in background and foreground modeling, Analysis of singular value thresholding algorithm for matrix completion, ROP: matrix recovery via rank-one projections, An efficient primal dual prox method for non-smooth optimization, A partial derandomization of phaselift using spherical designs, Set membership identification of switched linear systems with known number of subsystems, On the nuclear norm heuristic for a Hankel matrix completion problem, Coordinate descent algorithms, Splitting methods with variable metric for Kurdyka-Łojasiewicz functions and general convergence rates, Minimum rank (skew) Hermitian solutions to the matrix approximation problem in the spectral norm, Semi-supervised learning with nuclear norm regularization, A subgradient-based convex approximations method for DC programming and its applications, On linear convergence of projected gradient method for a class of affine rank minimization problems, Variational analysis of the Ky Fan \(k\)-norm, Unbiased risk estimates for matrix estimation in the 
elliptical case, Convergence and stability of iteratively reweighted least squares for low-rank matrix recovery, \(S_{1/2}\) regularization methods and fixed point algorithms for affine rank minimization problems, Convex optimization learning of faithful Euclidean distance representations in nonlinear dimensionality reduction, Restricted isometry property of principal component pursuit with reduced linear measurements, When only global optimization matters, Parallel stochastic gradient algorithms for large-scale matrix completion, High-dimensional change-point estimation: combining filtering with convex optimization, Alternating proximal gradient method for convex minimization, Low rank tensor recovery via iterative hard thresholding, Novel alternating update method for low rank approximation of structured matrices, Channel estimation for finite scatterers massive multi-user MIMO system, Iterative methods based on soft thresholding of hierarchical tensors, Rank-constrained optimization and its applications, The degrees of freedom of partly smooth regularizers, The local convexity of solving systems of quadratic equations, Spectral operators of matrices, Sums of Hermitian squares decomposition of non-commutative polynomials in non-symmetric variables using NCSOStools, Iterative hard thresholding for low-rank recovery from rank-one projections, A singular value \(p\)-shrinkage thresholding algorithm for low rank matrix recovery, A simple convergence analysis of Bregman proximal gradient algorithm, Non-smooth non-convex Bregman minimization: unification and new algorithms, The minimal measurement number for low-rank matrix recovery, Proximal iteratively reweighted algorithm for low-rank matrix recovery, \(\ell _p\) regularized low-rank approximation via iterative reweighted singular value minimization, Sparse blind deconvolution and demixing through \(\ell_{1,2}\)-minimization, Affine matrix rank minimization problem via non-convex fraction function penalty, Hybrid reconstruction of quantum density matrix: when low-rank meets sparsity, Painless breakups -- efficient demixing of low rank matrices, A new nonconvex approach to low-rank matrix completion with application to image inpainting, Linear convergence of the randomized sparse Kaczmarz method, A model for influence of nuclear-electricity industry on area economy, Enhancing matrix completion using a modified second-order total variation, Proximal alternating penalty algorithms for nonsmooth constrained convex optimization, Subspace-based spectrum estimation in innovation models by mixed norm minimization, Level-set methods for convex optimization, Learning semidefinite regularizers, Low-rank matrix recovery using Gabidulin codes in characteristic zero, A mixture of nuclear norm and matrix factorization for tensor completion, Stable recovery of low-dimensional cones in Hilbert spaces: one RIP to rule them all, DC formulations and algorithms for sparse optimization problems, Regularization and the small-ball method. 
I: Sparse recovery, Tensor completion using total variation and low-rank matrix factorization, Low-rank matrix completion using nuclear norm minimization and facial reduction, Equivalent Lipschitz surrogates for zero-norm and rank optimization problems, Block tensor train decomposition for missing data estimation, Convexifying the set of matrices of bounded rank: applications to the quasiconvexification and convexification of the rank function, On finding a generalized lowest rank solution to a linear semi-definite feasibility problem, Approximating the minimum rank of a graph via alternating projection, Error bounds for rank constrained optimization problems and applications, An efficient method for convex constrained rank minimization problems based on DC programming, Speeding up finite-time consensus via minimal polynomial of a weighted graph -- a numerical approach, Robust visual tracking via consistent low-rank sparse learning, The convex geometry of linear inverse problems, TILT: transform invariant low-rank textures, Monotonically convergent algorithms for symmetric tensor approximation, Compressed sensing and matrix completion with constant proportion of corruptions, Discussion: Latent variable graphical model selection via convex optimization, Rejoinder: Latent variable graphical model selection via convex optimization, Accelerated linearized Bregman method, Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm, Second order accurate distributed eigenvector computation for extremely large matrices, Restricted \(p\)-isometry properties of partially sparse signal recovery, Phase retrieval from Fourier measurements with masks, Fixed-point algorithms for frequency estimation and structured low rank approximation, Fast and provable algorithms for spectrally sparse signal reconstruction via low-rank Hankel matrix completion, Convex low rank approximation, Optimizing shrinkage curves and application in image denoising, An augmented Lagrangian method for the optimal \(H_\infty\) model order reduction problem, On the subdifferential of symmetric convex functions of the spectrum for symmetric and orthogonally decomposable tensors, 2D compressed learning: support matrix machine with bilinear random projections, Learning latent variable Gaussian graphical model for biomolecular network with low sample complexity, Online optimization for max-norm regularization, On recovery guarantees for one-bit compressed sensing on manifolds, Estimation of the parameters of a weighted nuclear norm model and its application in image denoising, Riemannian gradient descent methods for graph-regularized matrix completion, Quartic first-order methods for low-rank minimization, Low-rank matrix completion in a general non-orthogonal basis, Phase retrieval with PhaseLift algorithm, Low-rank matrix recovery via regularized nuclear norm minimization, Oracle posterior contraction rates under hierarchical priors, Double fused Lasso regularized regression with both matrix and vector valued predictors, Regularization parameter selection for the low rank matrix recovery, Tensor theta norms and low rank recovery, Error bound of critical points and KL property of exponent 1/2 for squared F-norm regularized factorization, Tensor-free proximal methods for lifted bilinear/quadratic inverse problems with applications to phase retrieval, A new method based on the manifold-alternative approximating for low-rank matrix completion, Low-rank dynamic mode decomposition: an 
exact and tractable solution, Sampling from non-smooth distributions through Langevin diffusion, Low-rank matrix recovery with composite optimization: good conditioning and rapid convergence, An adaptation for iterative structured matrix completion, Efficient proximal mapping computation for low-rank inducing norms, Low phase-rank approximation, An inexact symmetric ADMM algorithm with indefinite proximal term for sparse signal recovery and image restoration problems, On the robustness of minimum norm interpolators and regularized empirical risk minimizers, Riemannian conjugate gradient descent method for fixed multi rank third-order tensor completion, A semismooth Newton-based augmented Lagrangian algorithm for density matrix least squares problems, Tensor completion via a generalized transformed tensor t-product decomposition without t-SVD, An ensemble of high rank matrices arising from tournaments, Encoding inductive invariants as barrier certificates: synthesis via difference-of-convex programming, New challenges in covariance estimation: multiple structures and coarse quantization, Efficient low-rank regularization-based algorithms combining advanced techniques for solving tensor completion problems with application to color image recovering, New and explicit constructions of unbalanced Ramanujan bipartite graphs, Low tubal rank tensor recovery using the Bürer-Monteiro factorisation approach. Application to optical coherence tomography, Low-rank matrix recovery with Ky Fan 2-\(k\)-norm, Poisson reduced-rank models with sparse loadings, Learning with tree tensor networks: complexity estimates and model selection, Regularized high dimension low tubal-rank tensor regression, A fast proximal iteratively reweighted nuclear norm algorithm for nonconvex low-rank matrix minimization problems, Noisy tensor completion via the sum-of-squares hierarchy, Fitting Laplacian regularized stratified Gaussian models, An inexact proximal DC algorithm with sieving strategy for rank constrained least squares semidefinite programming, Bias versus non-convexity in compressed sensing, Non-convex low-rank representation combined with rank-one matrix sum for subspace clustering, Augmented Lagrangian methods for convex matrix optimization problems, Kurdyka-Łojasiewicz exponent via inf-projection, Inertial alternating direction method of multipliers for non-convex non-smooth optimization, Enhanced alternating energy minimization methods for stochastic Galerkin matrix equations, Trading off \(1\)-norm and sparsity against rank for linear models using mathematical optimization: \(1\)-norm minimizing partially reflexive ah-symmetric generalized inverses, Extended randomized Kaczmarz method for sparse least squares and impulsive noise problems, Low rank matrix recovery with impulsive noise, Cauchy noise removal by weighted nuclear norm minimization, Unnamed Item, Unnamed Item, Unnamed Item, Low-Rank Inducing Norms with Optimality Interpretations, Optimization Methods for Synthetic Aperture Radar Imaging, A General Theory of Singular Values with Applications to Signal Denoising, A Nonmonotone Alternating Updating Method for a Class of Matrix Factorization Problems, Fast and Reliable Parameter Estimation from Nonlinear Observations, Quadratic Growth Conditions for Convex Matrix Optimization Problems Associated with Spectral Functions, Multifrequency Interferometric Imaging with Intensity-Only Measurements, CGIHT: conjugate gradient iterative hard thresholding for compressed sensing and matrix completion, 
High-dimensional estimation with geometric constraints: Table 1., Near-optimal estimation of simultaneously sparse and low-rank matrices from nested linear measurements, Stable low-rank matrix recovery via null space properties, Finding Planted Subgraphs with Few Eigenvalues using the Schur--Horn Relaxation, Nuclear norm based two-dimensional sparse principal component analysis, Consistency Analysis for Massively Inconsistent Datasets in Bound-to-Bound Data Collaboration, On the equivalence between low-rank matrix completion and tensor rank, Simultaneous-shot inversion for PDE-constrained optimization problems with missing data, Adapting Regularized Low-Rank Models for Parallel Architectures, Implementing the Alternating Direction Method of Multipliers for Big Datasets: A Case Study of Least Absolute Shrinkage and Selection Operator, Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization, ELASTIC-NET REGULARIZATION FOR LOW-RANK MATRIX RECOVERY, An alternating direction method for linear‐constrained matrix nuclear norm minimization, Fractional minimal rank, Isolated calmness of solution mappings and exact recovery conditions for nuclear norm optimization problems, A Convex Relaxation to Compute the Nearest Structured Rank Deficient Matrix, A successive difference-of-convex approximation method for a class of nonconvex nonsmooth optimization problems, Nuclear norm system identification with missing inputs and outputs, Convex optimization for the planted \(k\)-disjoint-clique problem, On a unified view of nullspace-type conditions for recoveries associated with general sparsity structures, Sparse PCA: optimal rates and adaptive estimation, Efficient algorithms for robust and stable principal component pursuit problems, Properties and methods for finding the best rank-one approximation to higher-order tensors, Lowest-rank solutions of continuous and discrete Lyapunov equations over symmetric cone, An introduction to a class of matrix cone programming, Relations between least-squares and least-rank solutions of the matrix equation \(AXB=C\), High dimensional covariance matrix estimation using multi-factor models from incomplete information, Convergence analysis of projected gradient descent for Schatten-\(p\) nonconvex matrix recovery, Decentralized and privacy-preserving low-rank matrix completion, Proximal Markov chain Monte Carlo algorithms, Stable recovery of low-rank matrix via nonconvex Schatten \(p\)-minimization, Exact matrix completion via convex optimization, On the Schatten \(p\)-quasi-norm minimization for low-rank matrix recovery, Robust recovery of low-rank matrices with non-orthogonal sparse decomposition from incomplete measurements, An approximation method of CP rank for third-order tensor completion, The finite steps of convergence of the fast thresholding algorithms with \(f\)-feedbacks in compressed sensing, Proximal linearization methods for Schatten \(p\)-quasi-norm minimization, Terracini convexity, Sparse estimation via lower-order penalty optimization methods in high-dimensional linear regression, Compressed sensing of low-rank plus sparse matrices, Low rank matrix minimization with a truncated difference of nuclear norm and Frobenius norm regularization, Compressive Sensing, T-product factorization based method for matrix and tensor completion problems, Uniqueness in nuclear norm minimization: flatness of the nuclear norm sphere and simultaneous polarization, On minimal rank solutions to symmetric Lyapunov equations in Euclidean 
Jordan algebra, A proximal multiplier method for separable convex minimization, Operator-Lipschitz estimates for the singular value functional calculus, Optimal Kullback–Leibler approximation of Markov chains via nuclear norm regularisation, A Subgradient Method Based on Gradient Sampling for Solving Convex Optimization Problems, Collaborative Total Variation: A General Framework for Vectorial TV Models, Low Complexity Regularization of Linear Inverse Problems, Long horizon input parameterisations to enlarge the region of attraction of MPC, Recovery of Low Rank Symmetric Matrices via Schatten p Norm Minimization, Euclidean Distance Matrices and Applications, Matrix completion and tensor rank, Linear Models Based on Noisy Data and the Frisch Scheme, Low-Rank Spectral Optimization via Gauge Duality, Guarantees of Riemannian Optimization for Low Rank Matrix Recovery, Linear Convergence of Descent Methods for the Unconstrained Minimization of Restricted Strongly Convex Functions, Low-Rank Approximation and Completion of Positive Tensors, State-Space Modeling of Two-Dimensional Vector-Exponential Trajectories, Efficient Matrix Sensing Using Rank-1 Gaussian Measurements, Low Rank Estimation of Similarities on Graphs, EXACT LOW-RANK MATRIX RECOVERY VIA NONCONVEX SCHATTEN p-MINIMIZATION, An Overview of Computational Sparse Models and Their Applications in Artificial Intelligence, Forward–backward-based descent methods for composite variational inequalities, A Linearly Convergent Algorithm for Solving a Class of Nonconvex/Affine Feasibility Problems, Entropic Regularization of the ℓ 0 Function, Least-squares solutions and least-rank solutions of the matrix equation AXA *  = B and their relations, Penalty decomposition methods for rank minimization, A nonconvex approach to low-rank matrix completion using convex optimization, An Extended Frank--Wolfe Method with “In-Face” Directions, and Its Application to Low-Rank Matrix Completion, Closed-loop identification of unstable systems using noncausal FIR models, ORBITOPES, NCSOStools: a computer algebra system for symbolic and numerical computation with noncommutative polynomials, A Subspace Method for Large-Scale Eigenvalue Optimization, Truncated $l_{1-2}$ Models for Sparse Recovery and Rank Minimization, Book Review: A mathematical introduction to compressive sensing, An Adaptive Correction Approach for Tensor Completion, Illumination Strategies for Intensity-Only Imaging, Median-Truncated Gradient Descent: A Robust and Scalable Nonconvex Approach for Signal Estimation, L2RM: Low-Rank Linear Regression Models for High-Dimensional Matrix Responses, Dynamic Assortment Personalization in High Dimensions, Low-Rank and Sparse Multi-task Learning, New Analysis on Sparse Solutions to Random Standard Quadratic Optimization Problems and Extensions, Self-calibration and biconvex compressive sensing, Recovering Structured Signals in Noise: Least-Squares Meets Compressed Sensing, Tensor Completion in Hierarchical Tensor Representations, Rank-1 Tensor Properties with Applications to a Class of Tensor Optimization Problems, Solving orthogonal group synchronization via convex and low-rank optimization: tightness and landscape analysis, Tensor completion by multi-rank via unitary transformation, Iterative hard thresholding for low CP-rank tensor models, High-dimensional latent panel quantile regression with an application to asset pricing, Covariance prediction via convex optimization, Robust sensing of low-rank matrices with non-orthogonal sparse decomposition, 
Multiple change points detection in high-dimensional multivariate regression, Low-rank matrix recovery problem minimizing a new ratio of two norms approximating the rank function then using an ADMM-type solver with applications, Transformed Schatten-1 penalty based full-rank latent label learning for incomplete multi-label classification, Block-sparse recovery and rank minimization using a weighted \(l_p-l_q\) model, Fitting feature-dependent Markov chains, Color image inpainting based on low-rank quaternion matrix factorization, Matrix completion with column outliers and sparse noise, Robust Recovery of Low-Rank Matrices and Low-Tubal-Rank Tensors from Noisy Sketches, Generalized two-dimensional linear discriminant analysis with regularization, Inertial proximal ADMM for separable multi-block convex optimizations and compressive affine phase retrieval, Robust Recommendation via Social Network Enhanced Matrix Completion, A framework of regularized low-rank matrix models for regression and classification, Tensor rank reduction via coordinate flows, An efficient semi-proximal ADMM algorithm for low-rank and sparse regularized matrix minimization problems with real-world applications, Smooth over-parameterized solvers for non-smooth structured optimization, Revisiting Spectral Bundle Methods: Primal-Dual (Sub)linear Convergence Rates, Certifying the Absence of Spurious Local Minima at Infinity, Universal Features for High-Dimensional Learning and Inference, Calibrated multi-task subspace learning via binary group structure constraint, Adaptive tensor networks decomposition for high-order tensor recovery and compression, On Integrality in Semidefinite Programming for Discrete Optimization, Covariate-assisted matrix completion with multiple structural breaks, A parallel low rank matrix optimization method for recovering Internet traffic network data via link flow measurement, Inexact penalty decomposition methods for optimization problems with geometric constraints, A portmanteau local feature discrimination approach to the classification with high-dimensional matrix-variate data, Algorithmic Regularization in Model-Free Overparametrized Asymmetric Matrix Factorization, Accelerated matrix completion algorithm using continuation strategy and randomized SVD, Optimal Algorithms for Stochastic Complementary Composite Minimization, Sparse Reduced Rank Huber Regression in High Dimensions, The low-rank approximation of fourth-order partial-symmetric and conjugate partial-symmetric tensor, A singular value shrinkage thresholding algorithm for folded concave penalized low-rank matrix optimization problems, High-dimensional estimation of quadratic variation based on penalized realized variance, Further results on tensor nuclear norms, Smoothing fast proximal gradient algorithm for the relaxation of matrix rank regularization problem, Measuring Complexity of Learning Schemes Using Hessian-Schatten Total Variation, Modewise operators, the tensor restricted isometry property, and low-rank tensor recovery, Sparsity-Inducing Nonconvex Nonseparable Regularization for Convex Image Processing, A Corrected Tensor Nuclear Norm Minimization Method for Noisy Low-Rank Tensor Completion, PCA Sparsified, Low-rank matrix estimation via nonconvex optimization methods in multi-response errors-in-variables regression, A dual basis approach to multidimensional scaling, A survey on compressed sensing approach to systems and control, Nonnegative Low Rank Matrix Completion by Riemannian Optimalization Methods, Inexact 
generalized ADMM with relative error criteria for linearly constrained convex optimization problems, Expectile trace regression via low-rank and group sparsity regularization


Uses Software