Improving exploration strategies in large dimensions and rate of convergence of global random search algorithms

From MaRDI portal
Publication:6183083

DOI: 10.1007/s10898-023-01308-6
arXiv: 2208.11542
OpenAlex: W4385225555
MaRDI QID: Q6183083


Authors: Jack Noonan, A. Zhigljavsky


Publication date: 26 January 2024

Published in: Journal of Global Optimization

Abstract: We consider global optimization problems where the feasible region $X$ is a compact subset of $\mathbb{R}^d$ with $d \geq 10$. For these problems, we demonstrate the following. First, the actual convergence of global random search algorithms is much slower than that given by the classical estimates, which are based on the asymptotic properties of random points. Second, the usually recommended space exploration schemes are inefficient in the non-asymptotic regime. Specifically, (a) uniform sampling on the entire $X$ is much less efficient than uniform sampling on a suitable subset of $X$, and (b) the effect of replacing random points by low-discrepancy sequences is negligible.
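The baseline method studied in the abstract, pure random search, evaluates the objective at uniform random points and keeps the best. A minimal sketch below contrasts sampling on a full cube against sampling on a smaller concentric subcube; the sphere objective, the domain $[-1,1]^d$, and the shrink factor are illustrative assumptions, not taken from the paper.

```python
import random

def pure_random_search(f, d, n, lo, hi, seed=0):
    """Minimize f over the cube [lo, hi]^d using n uniform random points."""
    rng = random.Random(seed)
    best_x, best_val = None, float("inf")
    for _ in range(n):
        x = [rng.uniform(lo, hi) for _ in range(d)]
        val = f(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Illustrative objective (not from the paper): sphere function, minimum at 0.
def sphere(x):
    return sum(xi * xi for xi in x)

d, n = 10, 2000
# Sampling uniformly on the full cube [-1, 1]^d ...
_, full = pure_random_search(sphere, d, n, -1.0, 1.0)
# ... versus a smaller concentric subcube [-0.5, 0.5]^d that still
# contains the minimizer; in high dimension this concentrates the
# sample far closer to the optimum.
_, sub = pure_random_search(sphere, d, n, -0.5, 0.5)
print("best on full cube:", full)
print("best on subcube: ", sub)
```

With matching seeds the subcube points are the full-cube points scaled by 0.5, so the best sphere value on the subcube is a quarter of that on the full cube, illustrating (a) above in a toy setting.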


Full work available at URL: https://arxiv.org/abs/2208.11542









