The $(1+\lambda)$ Evolutionary Algorithm with Self-Adjusting Mutation Rate
April 07, 2017 · Declared Dead · Annual Conference on Genetic and Evolutionary Computation
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Benjamin Doerr, Christian GieΓen, Carsten Witt, Jing Yang
arXiv ID
1704.02191
Category
cs.NE: Neural and Evolutionary Computing
Citations
76
Venue
Annual Conference on Genetic and Evolutionary Computation
Last Checked
2 months ago
Abstract
We propose a new way to self-adjust the mutation rate in population-based evolutionary algorithms in discrete search spaces. Roughly speaking, it consists of creating half the offspring with a mutation rate that is twice the current mutation rate and the other half with half the current rate. The mutation rate is then updated to the rate used in that subpopulation which contains the best offspring. We analyze how the $(1+\lambda)$ evolutionary algorithm with this self-adjusting mutation rate optimizes the OneMax test function. We prove that this dynamic version of the $(1+\lambda)$ EA finds the optimum in an expected optimization time (number of fitness evaluations) of $O(n\lambda/\log\lambda + n\log n)$. This time is asymptotically smaller than the optimization time of the classic $(1+\lambda)$ EA. Previous work shows that this performance is best-possible among all $\lambda$-parallel mutation-based unbiased black-box algorithms. This result shows that the new way of adjusting the mutation rate can find optimal dynamic parameter values on the fly. Since our adjustment mechanism is simpler than the ones previously used for adjusting the mutation rate and does not have parameters itself, we are optimistic that it will find other applications.
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Neural & Evolutionary
R.I.P. 👻 Ghosted · Progressive Growing of GANs for Improved Quality, Stability, and Variation
R.I.P. 👻 Ghosted · Learning both Weights and Connections for Efficient Neural Networks
R.I.P. 👻 Ghosted · LSTM: A Search Space Odyssey
R.I.P. 👻 Ghosted · A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
R.I.P. 👻 Ghosted · An Introduction to Convolutional Neural Networks
Died the same way · 👻 Ghosted
R.I.P. 👻 Ghosted · Language Models are Few-Shot Learners
R.I.P. 👻 Ghosted · PyTorch: An Imperative Style, High-Performance Deep Learning Library
R.I.P. 👻 Ghosted · XGBoost: A Scalable Tree Boosting System