Randomized subspace gradient method for constrained optimization

From MaRDI portal
Publication: 6442909

arXiv: 2307.03335 · MaRDI QID: Q6442909 · FDO: Q6442909

Pierre-Louis Poirion, Akiko Takeda, Ryota Nozawa

Publication date: 6 July 2023

Abstract: We propose randomized subspace gradient methods for high-dimensional constrained optimization. While there have been similarly motivated studies on unconstrained optimization problems, there have been few on constrained problems because of the difficulty of handling constraints. Our algorithms project gradient vectors onto a subspace that is a random projection of the subspace spanned by the gradients of active constraints. We determine the worst-case iteration complexity in both linear and nonlinear settings and theoretically confirm that our algorithms can take a larger step size than their deterministic counterparts. Owing to the advantages of longer steps and randomized subspace gradients, we show that our algorithms are especially efficient in terms of time complexity when gradients cannot be obtained easily. Numerical experiments show that they tend to find better solutions because of the randomness of their subspace selection. Furthermore, they perform well in cases where gradients cannot be obtained directly and are instead estimated via directional derivatives.
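A minimal sketch of the idea described in the abstract, not the authors' exact algorithm: a Gaussian sketch P defines a random low-dimensional subspace, the reduced objective gradient is computed there, and the components along the subspace images of the active-constraint gradients are projected out before the step is mapped back to the full space. The names (rsg_step, grad_active), the toy problem, and all parameter values below are hypothetical illustration choices.

    import numpy as np

    def rsg_step(x, grad_f, grad_active, s, eta, rng):
        """One illustrative randomized-subspace gradient step (sketch only).

        x           : current iterate, shape (n,)
        grad_f      : callable returning the objective gradient at x
        grad_active : callable returning the gradients of the active
                      constraints at x as an (m, n) array (m may be 0)
        s           : subspace dimension (s << n)
        eta         : step size
        rng         : numpy.random.Generator
        """
        n = x.shape[0]
        # Gaussian sketch: rows of P span a random s-dimensional subspace;
        # the 1/sqrt(s) scaling makes E[P.T @ P] = I.
        P = rng.standard_normal((s, n)) / np.sqrt(s)

        g = P @ grad_f(x)          # objective gradient, reduced to the subspace
        A = grad_active(x)         # active-constraint gradients in R^n
        if A.shape[0] > 0:
            B = A @ P.T            # ... and their images in the subspace
            # Remove the components of g along the rows of B, so the step
            # stays (to first order) feasible for the active constraints.
            lam = np.linalg.lstsq(B @ B.T, B @ g, rcond=None)[0]
            g = g - B.T @ lam

        return x - eta * (P.T @ g)  # map the subspace step back to R^n

    # Toy usage (hypothetical): minimize ||x - c||^2 subject to x >= 0.
    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n = 1000
        c = np.ones(n)
        grad_f = lambda x: 2.0 * (x - c)

        def grad_active(x, tol=1e-8):
            # Gradients of the active bounds x_i >= 0 are -e_i.
            idx = np.flatnonzero(x <= tol)
            A = np.zeros((idx.size, n))
            A[np.arange(idx.size), idx] = -1.0
            return A

        x = np.full(n, 2.0)
        for _ in range(300):
            x = rsg_step(x, grad_f, grad_active, s=50, eta=0.01, rng=rng)
        print("objective:", float(np.sum((x - c) ** 2)))

Note that only the s-dimensional reduced gradient P @ grad_f(x) is needed per iteration, which is why the abstract highlights settings where full gradients are expensive: that product can be approximated with s directional derivatives along the rows of P.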

This page was built for publication: Randomized subspace gradient method for constrained optimization