Faster convergence of a randomized coordinate descent method for linearly constrained optimization problems


DOI: 10.1142/S0219530518500082
zbMATH Open: 1395.68331
arXiv: 1709.00982
OpenAlex: W2963367716
MaRDI QID: Q5375972


Authors: Qin Fang, Min Xu, Yiming Ying


Publication date: 17 September 2018

Published in: Analysis and Applications

Abstract: The problem of minimizing a separable convex function under linearly coupled constraints arises in various application domains such as economic systems, distributed control, and network flow. The main challenge in solving this problem is that the data size is very large, which makes standard gradient-based methods infeasible. Recently, Necoara, Nesterov, and Glineur [Journal of Optimization Theory and Applications, 173 (2017) 227-254] proposed an efficient randomized coordinate descent method for this type of optimization problem and presented an appealing convergence analysis. In this paper, we develop new techniques to analyze the convergence of such algorithms, which greatly improve the results presented there. This refined result is achieved by extending the second technique of Nesterov [SIAM J. Optim. 22 (2012) 341-362] to general optimization problems with linearly coupled constraints. A novel ingredient of our analysis is the construction of basis vectors for the subspace defined by the linear constraints.
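
To illustrate the class of methods the abstract refers to, the following is a minimal Python sketch of a randomized pairwise coordinate descent for the model problem min Σ_i f_i(x_i) subject to a single coupling constraint Σ_i x_i = b. This is an illustrative instance, not the authors' algorithm or analysis: the quadratic terms f_i, the constants c, t, b, and the exact line-search step are all assumptions made for this example. Updating a randomly chosen pair of coordinates along the direction e_i − e_j keeps every iterate feasible, which is the key structural feature of such methods.

```python
import numpy as np

# Illustrative test problem (hypothetical choices):
#   min  sum_i f_i(x_i)   s.t.  sum_i x_i = b,
# with quadratic terms f_i(x) = 0.5 * c[i] * (x - t[i])**2.
rng = np.random.default_rng(0)
n = 50
c = rng.uniform(1.0, 5.0, n)   # per-coordinate curvatures (Lipschitz constants)
t = rng.normal(size=n)         # per-coordinate targets
b = 1.0

x = np.full(n, b / n)          # feasible start: sum(x) == b

def grad_i(i, xi):
    """Gradient of the i-th separable term f_i at xi."""
    return c[i] * (xi - t[i])

for _ in range(20000):
    i, j = rng.choice(n, size=2, replace=False)
    # Exact minimization along e_i - e_j for this quadratic pair;
    # for general convex f_i one would instead use the step 1/(L_i + L_j).
    g = grad_i(i, x[i]) - grad_i(j, x[j])
    step = g / (c[i] + c[j])
    x[i] -= step               # the opposite-signed updates leave
    x[j] += step               # sum(x), and hence feasibility, unchanged

print("constraint residual:", abs(x.sum() - b))  # stays (numerically) zero
```

Because each update moves mass between exactly two coordinates, the coupling constraint is preserved without any projection step; this is what makes coordinate descent attractive for the large-scale linearly constrained problems discussed above.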


Full work available at URL: https://arxiv.org/abs/1709.00982








