The q-gradient method for global optimization

From MaRDI portal
Publication:6235563

DOI: 10.1063/1.4826022 · arXiv: 1209.2084 · MaRDI QID: Q6235563


Authors: Aline C. Soterroni, Roberto Luiz Galski, Fernando M. Ramos


Publication date: 10 September 2012

Abstract: The q-gradient is an extension of the classical gradient vector based on the concept of Jackson's derivative. Here we introduce a preliminary version of the q-gradient method for unconstrained global optimization. The main idea behind our approach is the use of the negative of the q-gradient of the objective function as the search direction. In this sense, the method proposed here is a generalization of the well-known steepest descent method. The use of Jackson's derivative has proved to be an effective mechanism for escaping from local minima. The q-gradient method is complemented with strategies to generate the parameter q and to compute the step length in such a way that the search process gradually shifts from global in the beginning to almost local search in the end. For testing this new approach, we considered six commonly used test functions and compared our results with three Genetic Algorithms (GAs) considered effective in optimizing multidimensional unimodal and multimodal functions. For the multimodal test functions, the q-gradient method outperformed the GAs, reaching the minimum with better accuracy and fewer function evaluations.
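The idea sketched in the abstract can be illustrated with a short numerical example. The i-th component of the q-gradient is Jackson's q-derivative, (f(x_1, ..., q x_i, ..., x_n) - f(x)) / ((q - 1) x_i), which reduces to the classical partial derivative as q → 1 or x_i → 0. The sketch below uses the negative q-gradient as the search direction and drives q toward 1 over the run, so the search shifts from global-like to local. Note that the q and step-length schedules here are illustrative assumptions, not the specific strategies of the paper.

```python
import numpy as np

def q_gradient(f, x, q, h=1e-8):
    """Numerical q-gradient based on Jackson's q-derivative.

    Component i is (f(..., q*x_i, ...) - f(x)) / ((q - 1) * x_i); we fall
    back to a classical forward difference when q ~ 1 or x_i ~ 0, where
    Jackson's derivative reduces to the ordinary derivative."""
    x = np.asarray(x, dtype=float)
    g = np.empty_like(x)
    fx = f(x)
    for i in range(x.size):
        xi = x[i]
        if abs(q - 1.0) < 1e-12 or abs(xi) < 1e-12:
            xp = x.copy()
            xp[i] = xi + h
            g[i] = (f(xp) - fx) / h
        else:
            xq = x.copy()
            xq[i] = q * xi
            g[i] = (f(xq) - fx) / ((q - 1.0) * xi)
    return g

def q_gradient_descent(f, x0, iters=200, step0=0.5):
    """Sketch of q-gradient descent (illustrative schedules, not the paper's).

    q starts away from 1 (broader, global-like search direction) and is
    annealed toward 1, where the method behaves like steepest descent;
    the step length shrinks over the run."""
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    for k in range(iters):
        t = k / max(iters - 1, 1)
        q = 1.0 + 1.5 * (1.0 - t)        # q -> 1 as iterations proceed
        step = step0 * (1.0 - 0.99 * t)  # step length decays over time
        g = q_gradient(f, x, q)
        n = np.linalg.norm(g)
        if n > 0.0:
            x = x - step * g / n         # move along the negative q-gradient
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x.copy(), fx
    return best_x, best_f
```

For a smooth convex function such as the sphere function, `q_gradient_descent(lambda v: float(np.sum(v**2)), [3.0, -2.0])` converges to a point near the origin; the escape-from-local-minima behaviour reported in the paper comes from the q ≠ 1 phase, where the search direction probes the function away from the current point.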

This page was built for publication: The q-gradient method for global optimization
