Convexification with bounded gap for randomly projected quadratic optimization

From MaRDI portal
Publication:5080508

DOI: 10.1137/21M1433678 · zbMATH Open: 1493.90122 · arXiv: 2107.05272 · OpenAlex: W4298096160 · MaRDI QID: Q5080508


Authors: Terunari Fuji, Pierre-Louis Poirion, Akiko Takeda


Publication date: 31 May 2022

Published in: SIAM Journal on Optimization

Abstract: Random projection techniques based on the Johnson-Lindenstrauss lemma are used to randomly aggregate the constraints or variables of optimization problems while approximately preserving their optimal values, which leads to smaller-scale optimization problems. D'Ambrosio et al. applied random projection to a quadratic optimization problem in order to decrease the number of decision variables. Although the problem size becomes smaller, the projected problem will almost surely also be non-convex if the original problem is non-convex, and hence will be hard to solve. In this paper, exploiting the fact that random projection can alleviate the level of non-convexity of a non-convex quadratic optimization problem, we find an approximate global optimal value of the problem by reducing it to a smaller convex problem. To the best of our knowledge, this paper is the first to use random projection for the convexification of non-convex optimization problems. We evaluate the approximation error between the optimal value of a non-convex optimization problem and that of its convexified, randomly projected problem.
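The mechanics described in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's construction or error analysis; the dimensions, the Gaussian sketch matrix, and the eigenvalue-clipping convexification are all assumptions chosen purely for illustration of (a) projecting a quadratic form into fewer variables and (b) replacing the projected non-convex form by a nearby convex one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, for illustration only: n original variables, k projected ones.
n, k = 200, 20

# A symmetric indefinite matrix: the quadratic form x^T Q x is non-convex.
Q = rng.standard_normal((n, n))
Q = (Q + Q.T) / 2

# Gaussian random projection x = P y (a Johnson-Lindenstrauss-style sketch).
P = rng.standard_normal((n, k)) / np.sqrt(k)
Q_proj = P.T @ Q @ P          # projected quadratic form, now in k variables

# A naive convexification of the projected form: clip negative eigenvalues to 0.
# (NOT the paper's method; it merely illustrates passing to a smaller convex problem.)
w, V = np.linalg.eigh(Q_proj)
Q_convex = V @ np.diag(np.maximum(w, 0.0)) @ V.T

print(Q_proj.shape)           # (20, 20)
```

The projected form `y^T Q_proj y` lives in k variables instead of n, and `Q_convex` is positive semidefinite, so minimizing the resulting quadratic is a convex problem; the paper's contribution is bounding the gap between such a convexified projected problem and the original one.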


Full work available at URL: https://arxiv.org/abs/2107.05272










Cited In (3)






