Efficient DC Algorithm for Constrained Sparse Optimization
From MaRDI portal
Publication:6282579
arXiv: 1701.08498 · MaRDI QID: Q6282579 · FDO: Q6282579
Jun-Ya Gotoh, Akiko Takeda, Katsuya Tono
Publication date: 30 January 2017
Abstract: We address the minimization of a smooth objective function under an ℓ0-constraint and simple convex constraints. When the problem has no constraints other than the ℓ0-constraint, efficient algorithms are available; for example, the Proximal DC (Difference of Convex functions) Algorithm (PDCA) repeatedly evaluates closed-form solutions of convex subproblems, leading to a stationary point of the ℓ0-constrained problem. When the problem has additional convex constraints, however, such algorithms become inefficient because closed-form solutions of the associated subproblems are hard to obtain. In this paper, we reformulate the problem by employing a new DC representation of the ℓ0-constraint, so that PDCA retains its efficiency: its subproblems reduce to a projection onto a convex set. Moreover, inspired by Nesterov's acceleration technique for proximal methods, we propose the Accelerated PDCA (APDCA), which attains the optimal convergence rate when applied to convex programs and performs well in numerical experiments.
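To illustrate the unconstrained-except-ℓ0 case mentioned in the abstract, where each convex subproblem has a closed-form solution, here is a minimal sketch of a PDCA-style proximal-gradient iteration whose subproblem is solved by hard thresholding (the projection onto the set {x : ‖x‖0 ≤ k}). This is an illustrative reconstruction, not the paper's exact algorithm; the function names (`hard_threshold`, `pdca_l0`) and the least-squares objective are assumptions made for the example.

```python
import numpy as np

def hard_threshold(v, k):
    """Projection onto {x : ||x||_0 <= k}: keep the k largest-magnitude
    entries of v and zero out the rest (closed-form subproblem solution)."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def pdca_l0(grad_f, L, x0, k, iters=500):
    """PDCA-style iteration for min f(x) s.t. ||x||_0 <= k, where f is
    smooth with L-Lipschitz gradient. Each step takes a gradient step
    and projects back onto the sparsity set in closed form."""
    x = x0.copy()
    for _ in range(iters):
        x = hard_threshold(x - grad_f(x) / L, k)
    return x

# Toy example: sparse least squares, f(x) = 0.5 * ||Ax - b||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true
L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
x_hat = pdca_l0(lambda x: A.T @ (A @ x - b), L, np.zeros(10), k=2)
```

With additional convex constraints (the setting of the paper), this closed-form projection is no longer available in general, which is what motivates the DC reformulation described above.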
This page was built for publication: Efficient DC Algorithm for Constrained Sparse Optimization