Log-linear convergence and divergence of the scale-invariant \((1+1)\)-ES in noisy environments
DOI: 10.1007/s00453-010-9403-3 · zbMath: 1230.68212 · OpenAlex: W2075208118 · MaRDI QID: Q633840
Anne Auger, Mohamed Jebalia, Nikolaus Hansen
Publication date: 30 March 2011
Published in: Algorithmica
Full work available at URL: https://doi.org/10.1007/s00453-010-9403-3
Keywords: convergence; convergence rates; Markov chains; Borel-Cantelli lemma; evolution strategies; numerical optimization; noisy optimization; stochastic optimization algorithms
MSC classifications: Numerical optimization and variational techniques (65K10); Stochastic programming (90C15); Approximation methods and heuristics in mathematical programming (90C59); Strong limit theorems (60F15); Markov chains (discrete-time Markov processes on discrete state spaces) (60J10); Randomized algorithms (68W20)
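The record concerns the scale-invariant \((1+1)\)-ES, in which the mutation step size is kept proportional to the distance of the current point from the optimum, so that on the sphere function the logarithm of that distance drifts linearly in the iteration count (log-linear convergence, or divergence when noise dominates selection). The sketch below is purely illustrative and not taken from the paper: the multiplicative noise model, the constants (`sigma`, `noise_std`, dimension), and the re-evaluation of the parent at every comparison are all assumptions made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_sphere(x, noise_std=0.1):
    """Sphere fitness ||x||^2 perturbed by multiplicative Gaussian noise
    (an illustrative noise model, not the paper's exact setting)."""
    return np.dot(x, x) * (1.0 + noise_std * rng.normal())

def scale_invariant_one_plus_one_es(dim=10, sigma=0.6, iters=2000):
    """Scale-invariant (1+1)-ES: the step size is sigma/dim times the
    current distance to the optimum (assumed known, hence 'scale-invariant').
    Plus-selection: the offspring replaces the parent only if its noisy
    fitness is at least as good; both points are (re-)evaluated each step."""
    x = np.ones(dim)
    history = []
    for _ in range(iters):
        step = (sigma / dim) * np.linalg.norm(x)   # step size proportional to ||x_t||
        y = x + step * rng.standard_normal(dim)    # isotropic Gaussian mutation
        if noisy_sphere(y) <= noisy_sphere(x):     # noisy comparison may err
            x = y
        history.append(np.linalg.norm(x))
    return history

h = scale_invariant_one_plus_one_es()
# Log-linear behavior: log ||x_t|| changes roughly linearly in t;
# the empirical slope estimates the convergence (or divergence) rate.
rate = (np.log(h[-1]) - np.log(h[0])) / len(h)
print(rate)
```

With this mild noise level the empirical slope is negative (log-linear convergence); increasing `noise_std` until the noise dominates the fitness differences flips the drift, which is the convergence/divergence dichotomy the paper analyzes.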
Related Items (6)
Cites Work
- Markov chains and stochastic stability
- Global convergence for evolution strategies in spherical problems: some simple proofs and difficulties
- Convergence results for the (1,\(\lambda\))-SA-ES using the theory of \(\varphi\)-irreducible Markov chains
- A comparison of evolution strategies with other direct search methods in the presence of noise
- Algorithmic analysis of a basic evolutionary algorithm for continuous optimization
- Probability with Martingales
- Foundations of Genetic Algorithms
- Foundations of Genetic Algorithms