Gradient descent with random initialization: fast global convergence for nonconvex phase retrieval


Publication:2425162

DOI: 10.1007/S10107-019-01363-6
zbMath: 1415.90086
arXiv: 1803.07726
OpenAlex: W3123272904
Wikidata: Q128449469 (Scholia: Q128449469)
MaRDI QID: Q2425162

Yuxin Chen, Yuejie Chi, Cong Ma, Jianqing Fan

Publication date: 26 June 2019

Published in: Mathematical Programming. Series A. Series B

Full work available at URL: https://arxiv.org/abs/1803.07726
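For readers unfamiliar with the setting: the paper studies vanilla gradient descent, started from a random Gaussian initialization, on the nonconvex least-squares loss for phase retrieval, and shows it attains an accurate solution after about \(O(\log n + \log(1/\epsilon))\) iterations. Below is a minimal NumPy sketch of that setup under the real-valued Gaussian measurement model \(y_i = (a_i^\top x^\star)^2\); the step size, iteration count, and problem sizes are illustrative choices for this sketch, not the paper's constants.

```python
import numpy as np

# Minimal sketch of vanilla gradient descent for real-valued phase retrieval:
# measurements y_i = (a_i^T x*)^2 with Gaussian a_i, and the quartic loss
#   f(x) = (1/4m) * sum_i ((a_i^T x)^2 - y_i)^2.
# Step size and iteration count are illustrative, not the paper's constants.

rng = np.random.default_rng(0)
n, m = 100, 1000                        # signal dimension, number of measurements
x_star = rng.standard_normal(n)
x_star /= np.linalg.norm(x_star)        # ground truth, normalized for convenience
A = rng.standard_normal((m, n))         # Gaussian sensing vectors a_i as rows
y = (A @ x_star) ** 2                   # phaseless (squared-magnitude) measurements

x = rng.standard_normal(n) / np.sqrt(n) # random initialization, x_0 ~ N(0, I/n)
eta = 0.1                               # constant step size (illustrative)

for t in range(500):
    z = A @ x
    grad = (A.T @ ((z ** 2 - y) * z)) / m  # gradient of the quartic loss
    x -= eta * grad

# x* is only identifiable up to a global sign, so measure distance to +/- x*
dist = min(np.linalg.norm(x - x_star), np.linalg.norm(x + x_star))
print(f"distance to ground truth (up to sign): {dist:.2e}")
```

The point of the paper is that no careful spectral initialization is needed here: starting from the random \(x_0\) above, the iterates first escape the saddle region near the origin in roughly \(O(\log n)\) steps and then converge linearly.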




Related Items (32)

* Multicategory Angle-Based Learning for Estimating Optimal Dynamic Treatment Regimes With Censored Data
* The numerics of phase retrieval
* Role of sparsity and structure in the optimization landscape of non-convex matrix sensing
* Tensor factorization recommender systems with dependency
* Revisiting Landscape Analysis in Deep Neural Networks: Eliminating Decreasing Paths to Infinity
* Nonconvex Low-Rank Tensor Completion from Noisy Data
* Sparse signal recovery from phaseless measurements via hard thresholding pursuit
* Anisotropic Diffusion in Consensus-Based Optimization on the Sphere
* Sharp global convergence guarantees for iterative nonconvex optimization with random data
* Implicit regularization in nonconvex statistical estimation: gradient descent converges linearly for phase retrieval, matrix completion, and blind deconvolution
* Convex and Nonconvex Optimization Are Both Minimax-Optimal for Noisy Blind Deconvolution Under Random Designs
* Fast gradient method for low-rank matrix estimation
* Unnamed Item
* Provable Phase Retrieval with Mirror Descent
* Near-optimal bounds for generalized orthogonal Procrustes problem via generalized power method
* Noisy Matrix Completion: Understanding Statistical Guarantees for Convex Relaxation via Nonconvex Optimization
* Recent Theoretical Advances in Non-Convex Optimization
* Complex phase retrieval from subgaussian measurements
* Median-Truncated Gradient Descent: A Robust and Scalable Nonconvex Approach for Signal Estimation
* Exact Recovery of Multichannel Sparse Blind Deconvolution via Gradient Descent
* Homomorphic sensing of subspace arrangements
* Unnamed Item
* A selective overview of deep learning
* Bridging convex and nonconvex optimization in robust PCA: noise, outliers and missing data
* Consensus-based optimization on hypersurfaces: Well-posedness and mean-field limit
* Spectral method and regularized MLE are both optimal for top-\(K\) ranking
* Low-rank matrix recovery with composite optimization: good conditioning and rapid convergence
* On the geometric analysis of a quartic-quadratic optimization problem under a spherical constraint
* On the Convergence of Mirror Descent beyond Stochastic Convex Programming
* Unnamed Item
* Asymptotic Properties of Stationary Solutions of Coupled Nonconvex Nonsmooth Empirical Risk Minimization
* Randomly initialized EM algorithm for two-component Gaussian mixture achieves near optimality in \(O(\sqrt{n})\) iterations


Uses Software



Cites Work



