On Global Linear Convergence in Stochastic Nonconvex Optimization for Semidefinite Programming
Publication: 5238975
DOI: 10.1109/TSP.2019.2925609
OpenAlex: W2957696227
MaRDI QID: Q5238975
Publication date: 28 October 2019
Published in: IEEE Transactions on Signal Processing
Full work available at URL: https://doi.org/10.1109/tsp.2019.2925609
Cited in (5):
- Sharp global convergence guarantees for iterative nonconvex optimization with random data
- A comparison of global and semi-local approximation in \(T\)-stage stochastic optimization
- On the Global Convergence of Randomized Coordinate Gradient Descent for Nonconvex Optimization
- On the local convergence of a stochastic semismooth Newton method for nonsmooth nonconvex optimization
- A linearly convergent stochastic recursive gradient method for convex optimization