Gradient descent with random initialization: fast global convergence for nonconvex phase retrieval

From MaRDI portal
Publication: 2425162

DOI: 10.1007/s10107-019-01363-6
zbMath: 1415.90086
arXiv: 1803.07726
OpenAlex: W3123272904
Wikidata: Q128449469
Scholia: Q128449469
MaRDI QID: Q2425162

Yuxin Chen, Yuejie Chi, Cong Ma, Jianqing Fan

Publication date: 26 June 2019

Published in: Mathematical Programming. Series A. Series B

Full work available at URL: https://arxiv.org/abs/1803.07726



Related Items

Multicategory Angle-Based Learning for Estimating Optimal Dynamic Treatment Regimes With Censored Data
The numerics of phase retrieval
Role of sparsity and structure in the optimization landscape of non-convex matrix sensing
Tensor factorization recommender systems with dependency
Revisiting Landscape Analysis in Deep Neural Networks: Eliminating Decreasing Paths to Infinity
Nonconvex Low-Rank Tensor Completion from Noisy Data
Sparse signal recovery from phaseless measurements via hard thresholding pursuit
Anisotropic Diffusion in Consensus-Based Optimization on the Sphere
Sharp global convergence guarantees for iterative nonconvex optimization with random data
Implicit regularization in nonconvex statistical estimation: gradient descent converges linearly for phase retrieval, matrix completion, and blind deconvolution
Convex and Nonconvex Optimization Are Both Minimax-Optimal for Noisy Blind Deconvolution Under Random Designs
Fast gradient method for low-rank matrix estimation
Unnamed Item
Provable Phase Retrieval with Mirror Descent
Near-optimal bounds for generalized orthogonal Procrustes problem via generalized power method
Noisy Matrix Completion: Understanding Statistical Guarantees for Convex Relaxation via Nonconvex Optimization
Recent Theoretical Advances in Non-Convex Optimization
Complex phase retrieval from subgaussian measurements
Median-Truncated Gradient Descent: A Robust and Scalable Nonconvex Approach for Signal Estimation
Exact Recovery of Multichannel Sparse Blind Deconvolution via Gradient Descent
Homomorphic sensing of subspace arrangements
Unnamed Item
A selective overview of deep learning
Bridging convex and nonconvex optimization in robust PCA: noise, outliers and missing data
Consensus-based optimization on hypersurfaces: Well-posedness and mean-field limit
Spectral method and regularized MLE are both optimal for top-\(K\) ranking
Low-rank matrix recovery with composite optimization: good conditioning and rapid convergence
On the geometric analysis of a quartic-quadratic optimization problem under a spherical constraint
On the Convergence of Mirror Descent beyond Stochastic Convex Programming
Unnamed Item
Asymptotic Properties of Stationary Solutions of Coupled Nonconvex Nonsmooth Empirical Risk Minimization
Randomly initialized EM algorithm for two-component Gaussian mixture achieves near optimality in \(O(\sqrt{n})\) iterations


Uses Software


Cites Work