Self-adjusting evolutionary algorithms for multimodal optimization
From MaRDI portal
Publication:2144276
Abstract: Recent theoretical research has shown that self-adjusting and self-adaptive mechanisms can provably outperform static settings in evolutionary algorithms for binary search spaces. However, the vast majority of these studies focus on unimodal functions, which do not require the algorithm to flip several bits simultaneously to make progress. In fact, existing self-adjusting algorithms are not designed to detect local optima and offer no obvious benefit in crossing large Hamming gaps. We suggest a mechanism called stagnation detection that can be added as a module to existing evolutionary algorithms (both with and without prior self-adjusting schemes). Added to a simple (1+1) EA, we prove an expected runtime on the well-known Jump benchmark that corresponds to an asymptotically optimal parameter setting and outperforms other mechanisms for multimodal optimization such as heavy-tailed mutation. We also investigate the module in the context of a self-adjusting \((1+\lambda)\) EA and show that it combines the previous benefits of this algorithm on unimodal problems with more efficient multimodal optimization. To explore the limitations of the approach, we additionally present an example where both self-adjusting mechanisms, including stagnation detection, do not help to find a beneficial setting of the mutation rate. Finally, we investigate our module for stagnation detection experimentally.
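The idea behind stagnation detection can be illustrated with a minimal Python sketch. This is not the authors' exact algorithm: the counting threshold, its constant factor, and the reset rule below are illustrative assumptions, and the Jump fitness function follows the standard textbook definition. The module counts unsuccessful steps at the current mutation strength r and, once an r-bit improvement has had ample opportunity to occur, escalates r so that larger Hamming gaps can be crossed.

```python
import math
import random


def onemax_jump(x, m=3):
    """Standard Jump_m benchmark: OneMax with a fitness valley of width m
    just before the all-ones optimum."""
    n = len(x)
    ones = sum(x)
    if ones == n or ones <= n - m:
        return m + ones
    return n - ones


def ea_with_stagnation_detection(n, fitness, max_evals=200_000):
    """(1+1) EA with a stagnation-detection module (hypothetical sketch).

    The mutation strength r starts at 1. Each generation flips every bit
    independently with probability r/n. After roughly C(n, r) * ln(n)
    unsuccessful steps (an assumed threshold), an r-bit improvement would
    very likely have been found if one existed, so r is increased.
    """
    x = [random.randint(0, 1) for _ in range(n)]
    fx = fitness(x)
    r, counter = 1, 0
    for _ in range(max_evals):
        threshold = 2 * math.comb(n, r) * math.log(n)
        y = x[:]
        for i in range(n):           # standard bit mutation, rate r/n
            if random.random() < r / n:
                y[i] = 1 - y[i]
        fy = fitness(y)
        if fy > fx:                  # strict improvement: reset detection
            x, fx = y, fy
            r, counter = 1, 0
        else:
            counter += 1
            if counter > threshold:  # stagnation detected at strength r
                r = min(r + 1, n // 2)
                counter = 0
    return x, fx
```

On Jump_m, a plain (1+1) EA with rate 1/n is stuck at the local optimum until it flips exactly the m gap bits in one step; the sketch instead raises r step by step until r = m, at which point crossing the gap becomes far more likely.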
Cites work
- scientific article; zbMATH DE number 5686753
- A tight runtime analysis for the \((1+(\lambda,\lambda))\) GA on LeadingOnes
- Adaptive population models for offspring populations and parallel evolutionary algorithms
- Analyzing evolutionary algorithms. The computer science perspective.
- Artificial immune systems can find arbitrarily good approximations for the NP-hard number partitioning problem
- Bioinspired computation in combinatorial optimization. Algorithms and their computational complexity
- Hitting-time and occupation-time bounds implied by drift analysis with applications
- On the analysis of the \((1+1)\) evolutionary algorithm
- Optimal static and self-adjusting parameter choices for the \((1+(\lambda,\lambda))\) genetic algorithm
- Population size versus runtime of a simple evolutionary algorithm
- Runtime analysis for self-adaptive mutation rates
- Stagnation detection with randomized local search
- Static and self-adjusting mutation strengths for multi-valued decision variables
- The \((1+\lambda)\) evolutionary algorithm with self-adjusting mutation rate
- The benefits and limitations of voting mechanisms in evolutionary optimisation
- Theory of evolutionary computation. Recent developments in discrete optimization
- Tight bounds on the optimization time of a randomized search heuristic on linear functions
Cited in (11)
- scientific article; zbMATH DE number 1984139
- Choosing the right algorithm with hints from complexity theory
- Lazy parameter tuning and control: choosing all parameters randomly from a power-law distribution
- Lower bounds from fitness levels made easy
- An extended jump functions benchmark for the analysis of randomized search heuristics
- Runtime analysis for permutation-based evolutionary algorithms
- Stagnation detection in highly multimodal fitness landscapes
- Auto-tuning strategy for evolutionary algorithms: Balancing between exploration and exploitation
- Self-adjusting offspring population sizes outperform fixed parameters on the Cliff function
- Stagnation detection meets fast mutation
- Decomposition-based multiobjective optimization for nonlinear equation systems with many and infinitely many roots