Using Automated Algorithm Configuration for Parameter Control
From MaRDI portal
Publication:6202151
Abstract: Dynamic Algorithm Configuration (DAC) tackles the question of how to automatically learn policies for controlling the parameters of algorithms in a data-driven fashion. This question has received considerable attention from the evolutionary computation community in recent years. A good benchmark collection for gaining a structural understanding of the effectiveness and limitations of different solution methods for DAC is therefore highly desirable. Following recent work proposing DAC benchmarks with well-understood theoretical properties and ground-truth information, we suggest as a new DAC benchmark the control of the key parameter \(\lambda\) in the \((1+(\lambda,\lambda))\) Genetic Algorithm for solving OneMax problems. We study how to solve this DAC problem via (static) automated algorithm configuration on the benchmark and propose techniques that significantly improve the performance of the approach. On sufficiently large problem sizes, our approach consistently outperforms the benchmark's default parameter-control policy, which was derived from previous theoretical work. We also present new findings on the landscape of parameter-control policies and propose methods to compute stronger baselines for the benchmark via numerical approximations of the true optimal policies.
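For readers unfamiliar with the benchmark, the following is a minimal Python sketch of the \((1+(\lambda,\lambda))\) GA on OneMax with a pluggable \(\lambda\)-control policy. The function names are illustrative, and the default policy \(\lambda = \sqrt{n/(n - f(x))}\) is the fitness-dependent choice studied in the theory literature cited below; implementation details such as tie-breaking are simplifying assumptions, not the exact setup of the paper.

```python
import math
import random

def onemax(x):
    """OneMax fitness: the number of one-bits in the bit string."""
    return sum(x)

def default_lambda(n, fx):
    """Fitness-dependent policy lambda = sqrt(n / (n - f(x))), rounded to an int."""
    return max(1, round(math.sqrt(n / (n - fx)))) if fx < n else 1

def one_ll_ga(n, policy=default_lambda, seed=0, max_evals=100_000):
    """(1+(lambda,lambda)) GA on OneMax; `policy` maps (n, f(x)) to lambda."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx, evals = onemax(x), 0
    while fx < n and evals < max_evals:
        lam = policy(n, fx)
        # Mutation phase: draw ell ~ Bin(n, lam/n), then create lam offspring,
        # each flipping exactly ell uniformly chosen positions.
        ell = sum(rng.random() < lam / n for _ in range(n))
        best_mut, best_mut_f = x, -1
        for _ in range(lam):
            y = x[:]
            for i in rng.sample(range(n), ell):
                y[i] = 1 - y[i]
            fy = onemax(y); evals += 1
            if fy > best_mut_f:
                best_mut, best_mut_f = y, fy
        # Crossover phase: biased uniform crossover taking each gene from the
        # best mutant with probability 1/lam, otherwise from the parent.
        c = 1.0 / lam
        best_cross, best_cross_f = x, fx
        for _ in range(lam):
            y = [best_mut[i] if rng.random() < c else x[i] for i in range(n)]
            fy = onemax(y); evals += 1
            if fy > best_cross_f:
                best_cross, best_cross_f = y, fy
        if best_cross_f >= fx:  # elitist selection over parent and crossover winner
            x, fx = best_cross, best_cross_f
    return fx, evals
```

In this framing, a DAC policy is simply a function from the current state (here, the fitness \(f(x)\)) to a parameter value, which is what makes static algorithm configurators applicable once the policy space is parameterized.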
Cites work
- A deep reinforcement learning based hyper-heuristic for combinatorial optimisation with uncertainties
- A rigorous runtime analysis of the \((1 + (\lambda, \lambda))\) GA on jump functions
- A tight runtime analysis for the \((1+(\lambda,\lambda))\) GA on LeadingOnes
- An experimental study of operator choices in the \((1+(\lambda,\lambda))\) genetic algorithm
- Analyzing bandit-based adaptive operator selection mechanisms
- Automated Dynamic Algorithm Configuration
- Black-box search by unbiased variation
- Capping methods for the automatic configuration of optimization algorithms
- Fast mutation in crossover-based algorithms
- From black-box complexity to designing new genetic algorithms
- Learning probability distributions in continuous evolutionary algorithms -- a comparative review
- Optimal static and self-adjusting parameter choices for the \((1+(\lambda ,\lambda ))\) genetic algorithm
- Playing Mastermind with constant-size memory