Parallel stochastic gradient algorithms for large-scale matrix completion
DOI: 10.1007/s12532-013-0053-8
zbMath: 1275.90039
OpenAlex: W2105767123
MaRDI QID: Q2392935
Benjamin Recht, Christopher Ré
Publication date: 5 August 2013
Published in: Mathematical Programming Computation
Full work available at URL: https://doi.org/10.1007/s12532-013-0053-8
Convex programming (90C25); Large-scale problems in mathematical programming (90C06); Stochastic programming (90C15)
Related Items
- Spectral gap in random bipartite biregular graphs and applications
- Inexact coordinate descent: complexity and preconditioning
- Optimization landscape of Tucker decomposition
- 1-bit matrix completion: PAC-Bayesian analysis of a variational approximation
- Mixed-Projection Conic Optimization: A New Paradigm for Modeling Rank Constraints
- Block Stochastic Gradient Iteration for Convex and Nonconvex Optimization
- A mini-batch proximal stochastic recursive gradient algorithm with diagonal Barzilai-Borwein stepsize
- On the Efficiency of Random Permutation for ADMM and Coordinate Descent
- On the rates of convergence of parallelized averaged stochastic gradient algorithms
- Block mirror stochastic gradient method for stochastic optimization
- Random-reshuffled SARAH does not need full gradient computations
- T-product factorization based method for matrix and tensor completion problems
- A Continuous-Time Analysis of Distributed Stochastic Gradient
- Robust principal component analysis using facial reduction
- Worst-case complexity of cyclic coordinate descent: \(O(n^2)\) gap with randomized version
- Multilevel Stochastic Gradient Methods for Nested Composition Optimization
- Why random reshuffling beats stochastic gradient descent
- Conditional gradient algorithms for norm-regularized smooth convex optimization
- Matrix recipes for hard thresholding methods
- The Alternating Descent Conditional Gradient Method for Sparse Inverse Problems
- Linear feature transform and enhancement of classification on deep neural network
- Numerical comparisons between Bayesian and frequentist low-rank matrix completion: estimation accuracy and uncertainty quantification
- Recovery of simultaneous low rank and two-way sparse coefficient matrices, a nonconvex approach
- Decentralized and privacy-preserving low-rank matrix completion
- A Riemannian gossip approach to subspace learning on Grassmann manifold
- Convergence Rate of Incremental Gradient and Incremental Newton Methods
- Enabling numerically exact local solver for waveform inversion -- a low-rank approach
- Rank $2r$ Iterative Least Squares: Efficient Recovery of Ill-Conditioned Low Rank Matrices from Few Entries
- Orthogonal Rank-One Matrix Pursuit for Low Rank Matrix Completion
- A Bayesian approach for noisy matrix completion: optimal rate under general sampling distribution
- Low-rank matrix completion via preconditioned optimization on the Grassmann manifold
- Approximate matrix completion based on cavity method
- Analysis of multiview legislative networks with structured matrix factorization: does Twitter influence translate to the real world?
Uses Software
Cites Work
- Tensor Decompositions and Applications
- Fixed point and Bregman iterative methods for matrix rank minimization
- Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm
- Local minima and convergence in low-rank semidefinite programming
- Exact matrix completion via convex optimization
- A Singular Value Thresholding Algorithm for Matrix Completion
- Interior-Point Method for Nuclear Norm Approximation with Application to System Identification
- Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization
- Robust Stochastic Approximation Approach to Stochastic Programming
- Monotone Operators and the Proximal Point Algorithm
- A New Class of Incremental Gradient Methods for Least Squares Problems
- An Incremental Gradient(-Projection) Method with Momentum Term and Adaptive Stepsize Rule
- Recovering Low-Rank Matrices From Few Coefficients in Any Basis
- Matrix Completion From a Few Entries
- The Power of Convex Relaxation: Near-Optimal Matrix Completion
- A Simpler Approach to Matrix Completion
- Approximating the Cut-Norm via Grothendieck's Inequality
- Learning Theory
- Signal Recovery by Proximal Forward-Backward Splitting
- Benchmarking optimization software with performance profiles