GECCO '19: Proceedings of the Genetic and Evolutionary Computation Conference

SESSION: Theory

Runtime analysis of randomized search heuristics for dynamic graph coloring

  • Jakob Bossek
  • Frank Neumann
  • Pan Peng
  • Dirk Sudholt

We contribute to the theoretical understanding of randomized search heuristics for dynamic problems. We consider the classical graph coloring problem and investigate the dynamic setting where edges are added to the current graph. We then analyze the expected time for randomized search heuristics to recompute high-quality solutions. This includes the (1+1) EA and RLS in a setting where the number of colors is bounded and the number of conflicts is to be minimized, as well as iterated local search algorithms that use an unbounded color palette and aim to use the smallest colors and, as a consequence, the smallest number of colors.

We identify classes of bipartite graphs where reoptimization is as hard as or even harder than optimization from scratch, i.e., starting from a random initialization. Even adding a single edge can lead to hard symmetry problems. However, graph classes that are hard for one algorithm turn out to be easy for others. In most cases our bounds show that reoptimization is faster than optimizing from scratch. Furthermore, we show how to speed up computations by using problem-specific operators that concentrate on the parts of the graph where changes have occurred.
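
As a concrete illustration of the bounded-palette setting, here is a minimal sketch of a conflict-minimizing RLS repairing a coloring after a single edge insertion. The graph, the palette size k = 3, and the acceptance rule below are illustrative assumptions, not the exact algorithms analyzed in the paper.

    import random

    def conflicts(edges, coloring):
        # Number of edges whose endpoints currently share a color.
        return sum(1 for u, v in edges if coloring[u] == coloring[v])

    def rls_recolor(edges, coloring, k, max_iters=100_000, seed=1):
        # Conflict-minimizing RLS: recolor one uniformly chosen vertex with a
        # different color; accept if the number of conflicts does not increase.
        rng = random.Random(seed)
        n = len(coloring)
        best = conflicts(edges, coloring)
        for _ in range(max_iters):
            if best == 0:
                break
            v = rng.randrange(n)
            old = coloring[v]
            coloring[v] = rng.choice([c for c in range(k) if c != old])
            new = conflicts(edges, coloring)
            if new <= best:
                best = new
            else:
                coloring[v] = old
        return coloring, best

    # Dynamic scenario: a properly colored path gains one edge whose endpoints
    # share a color; RLS then repairs the coloring with a palette of k = 3 colors.
    n, k = 20, 3
    edges = [(i, i + 1) for i in range(n - 1)]
    coloring = [i % 2 for i in range(n)]   # proper 2-coloring of the path
    edges.append((0, 2))                   # dynamic change: both endpoints have color 0
    coloring, remaining = rls_recolor(edges, coloring, k)
    print("remaining conflicts:", remaining)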

On the benefits of populations for the exploitation speed of standard steady-state genetic algorithms

  • Dogan Corus
  • Pietro S. Oliveto

It is generally accepted that populations are useful for the global exploration of multi-modal optimisation problems. Indeed, several theoretical results are available showing such advantages over single-trajectory search heuristics. In this paper we provide evidence that evolving populations via crossover and mutation may also benefit the optimisation time for hillclimbing unimodal functions. In particular, we prove bounds on the expected runtime of the standard (µ+1) GA for OneMax that are lower than its unary black box complexity and decrease in the leading constant with the population size up to [MATH HERE]. Our analysis suggests that the optimal mutation strategy is to flip two bits most of the time. To achieve these results we provide two interesting contributions to the theory of randomised search heuristics: 1) a novel application of drift analysis which compares absorption times of different Markov chains without defining an explicit potential function; 2) the inversion of fundamental matrices to calculate the absorption times of the Markov chains. The latter strategy was previously proposed in the literature, but to the best of our knowledge this is the first time it has been used to show non-trivial bounds on expected runtimes.
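
For reference, a minimal sketch of a standard steady-state (µ+1) GA on OneMax as it is usually defined in this line of work: one offspring per generation, created by uniform crossover or parent copying followed by standard bit mutation, after which a worst individual is removed. The crossover probability and the other parameter values below are illustrative assumptions.

    import random

    def onemax(x):
        return sum(x)

    def mu_plus_one_ga(n=100, mu=8, crossover_prob=0.5, seed=2):
        # Steady-state (mu+1) GA: each generation creates one offspring, either by
        # uniform crossover of two uniformly chosen parents or by copying a parent,
        # applies standard bit mutation with rate 1/n, and removes a worst individual.
        rng = random.Random(seed)
        pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(mu)]
        generations = 0
        while max(onemax(x) for x in pop) < n:
            generations += 1
            if rng.random() < crossover_prob:
                p1, p2 = rng.choice(pop), rng.choice(pop)
                child = [rng.choice((a, b)) for a, b in zip(p1, p2)]
            else:
                child = list(rng.choice(pop))
            child = [b ^ 1 if rng.random() < 1.0 / n else b for b in child]
            pop.append(child)
            pop.remove(min(pop, key=onemax))   # drop one worst individual
        return generations

    print("generations until OneMax is optimized:", mu_plus_one_ga())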

The efficiency threshold for the offspring population size of the (µ, λ) EA

  • Denis Antipov
  • Benjamin Doerr
  • Quentin Yang

Understanding when evolutionary algorithms are efficient or not, and how they efficiently solve problems, is one of the central research tasks in evolutionary computation. In this work, we make progress in understanding the interplay between parent and offspring population size of the (µ, λ) EA. Previous works, roughly speaking, indicate that for λ ≥ (1 + ε)eµ, this EA easily optimizes the OneMax function, whereas an offspring population size λ ≤ (1 - ε)eµ leads to an exponential runtime.

Motivated also by the observation that in the efficient regime the (µ, λ) EA loses its ability to escape local optima, we take a closer look into this phase transition. Among other results, we show that when µ ≤ n^{1/2-c} for any constant c > 0, then for any λ we have a super-polynomial runtime. However, if µ ≥ n^{2/3+c}, then for any λ, the runtime is polynomial. For the latter result we observe that the (µ, λ) EA profits from better individuals also because these, by creating slightly worse offspring, stabilize slightly sub-optimal sub-populations. While these first results close to the phase transition do not yet give a complete picture, they indicate that the boundary between efficient and super-polynomial is not just the line λ = eµ, and that the reasons for efficiency or not are more complex than what was known so far.
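
A minimal sketch of the (µ, λ) EA on OneMax, with standard bit mutation and non-elitist truncation (comma) selection. The offspring population size below is deliberately chosen a constant factor above the λ ≈ eµ threshold discussed above; all concrete values are illustrative assumptions.

    import math
    import random

    def onemax(x):
        return sum(x)

    def mu_comma_lambda_ea(n=50, mu=5, lam=None, max_gens=100_000, seed=3):
        # (mu, lambda) EA: lambda offspring are created by standard bit mutation of
        # uniformly chosen parents; the next parent population consists of the mu
        # best offspring only (comma selection, hence non-elitist).
        rng = random.Random(seed)
        if lam is None:
            lam = int(2 * math.e * mu)   # comfortably above the lambda ~ e*mu threshold
        pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(mu)]
        for gen in range(1, max_gens + 1):
            offspring = []
            for _ in range(lam):
                parent = rng.choice(pop)
                offspring.append([b ^ 1 if rng.random() < 1.0 / n else b for b in parent])
            pop = sorted(offspring, key=onemax, reverse=True)[:mu]
            if onemax(pop[0]) == n:
                return gen
        return None   # no optimum found within the budget

    print("generations until OneMax is optimized:", mu_comma_lambda_ea())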

Multiplicative up-drift

  • Benjamin Doerr
  • Timo Kötzing

Drift analysis aims at translating the expected progress of an evolutionary algorithm (or, more generally, a random process) into a probabilistic guarantee on its run time (hitting time). So far, drift arguments have been employed successfully in the rigorous analysis of evolutionary algorithms, however only for situations in which the progress is constant or becomes weaker when approaching the target.

Motivated by questions like how fast fit individuals take over a population, we analyze random processes exhibiting a multiplicative growth in expectation. We prove a drift theorem translating this expected progress into a hitting time. This drift theorem gives a simple and insightful proof of the level-based theorem first proposed by Lehre (2011). Our version of this theorem has, for the first time, the best-possible linear dependence on the growth parameter δ (the previous-best was quadratic). This gives immediately stronger run time guarantees for a number of applications.
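
The following toy simulation (an illustrative construction, not the paper's setting) shows what a process with multiplicative up-drift looks like: conditional on the current state, the next state has expectation (1 + δ) times the current one, yet the process may also die out and has to be restarted. The hitting time of the target n can then be compared across values of δ.

    import random

    def hitting_time(n=1000, delta=0.1, seed=4):
        # Toy process with multiplicative up-drift: X_{t+1} ~ Bin(n, min(1, (1+delta)*X_t/n)),
        # so E[X_{t+1} | X_t] = (1 + delta) * X_t as long as (1 + delta) * X_t <= n.
        # We record the time until X_t = n, restarting from 1 whenever the process dies out.
        rng = random.Random(seed)
        x, t = 1, 0
        while x < n:
            t += 1
            p = min(1.0, (1 + delta) * x / n)
            x = sum(1 for _ in range(n) if rng.random() < p)   # one Binomial(n, p) sample
            if x == 0:
                x = 1   # extinction happens despite the positive drift; restart
        return t

    for delta in (0.05, 0.1, 0.2):
        print("delta =", delta, "-> hitting time:", hitting_time(delta=delta))

Running this for decreasing δ lets one observe the roughly linear growth of the hitting time in 1/δ, in line with the linear dependence on the growth parameter δ mentioned above.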

Self-adjusting mutation rates with provably optimal success rules

  • Benjamin Doerr
  • Carola Doerr
  • Johannes Lengler

The one-fifth success rule is one of the best-known and most widely accepted techniques to control the parameters of evolutionary algorithms. While it is often applied in the literal sense, a common interpretation sees the one-fifth success rule as a family of success-based update rules that are determined by an update strength F and a success rate s. We analyze in this work how the performance of the (1+1) Evolutionary Algorithm (EA) on LeadingOnes depends on these two hyper-parameters. Our main result shows that the best performance is obtained for small update strengths F = 1 + o(1) and success rate 1/e. We also prove that the runtime obtained by this parameter setting is asymptotically optimal among all dynamic choices of the mutation rate for the (1+1) EA, up to lower-order error terms. We show similar results for the resampling variant of the (1+1) EA, which enforces that at least one bit is flipped in each iteration.
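
A minimal sketch of a (1+1) EA on LeadingOnes with a success-based mutation-rate control parametrized by an update strength F and a success rate s. The exact update used here (multiply the rate by F after a success, by F^{-s/(1-s)} otherwise, so that the rate is in equilibrium at success probability s), the definition of a success as a non-worsening step, and the caps on the rate are assumptions made for illustration and need not coincide with the paper's parametrization.

    import math
    import random

    def leading_ones(x):
        lo = 0
        for b in x:
            if b != 1:
                break
            lo += 1
        return lo

    def self_adjusting_ea(n=100, F=1.05, s=1 / math.e, seed=5):
        # (1+1) EA on LeadingOnes with a success-based mutation-rate update:
        # after a success (offspring not worse) multiply the rate by F, otherwise
        # by F**(-s/(1-s)); the rate is then stable where the success probability is s.
        rng = random.Random(seed)
        x = [rng.randint(0, 1) for _ in range(n)]
        fx = leading_ones(x)
        rate = 1.0 / n
        evaluations = 0
        while fx < n:
            evaluations += 1
            y = [b ^ 1 if rng.random() < rate else b for b in x]
            fy = leading_ones(y)
            if fy >= fx:
                rate *= F                      # frequent successes -> larger rate
                x, fx = y, fy                  # standard (1+1) EA acceptance
            else:
                rate *= F ** (-s / (1 - s))    # rare successes -> smaller rate
            rate = min(max(rate, 1.0 / n ** 2), 0.5)   # keep the rate in a plausible range (assumption)
        return evaluations

    print("evaluations until LeadingOnes is optimized:", self_adjusting_ea())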

A tight runtime analysis for the cGA on jump functions: EDAs can cross fitness valleys at no extra cost

  • Benjamin Doerr

We prove that the compact genetic algorithm (cGA) with hypothetical population size [MATH HERE] poly(n) with high probability finds the optimum of any n-dimensional jump function with jump size [MATH HERE] ln n in [MATH HERE] iterations. Since it is known that the cGA with high probability needs at least [MATH HERE] iterations to optimize the unimodal OneMax function, our result shows that the cGA, in contrast to most classic evolutionary algorithms, is here able to cross moderate-sized valleys of low fitness at no extra cost.

Our runtime guarantee improves over the recent upper bound O(µ n^{1.5} log n) valid for µ = Ω(n^{3.5+ε}) of Hasenöhrl and Sutton (GECCO 2018). For the best choice of the hypothetical population size, this result gives a runtime guarantee of O(n^{5+ε}), whereas ours gives O(n log n).
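
A minimal sketch of the cGA on a jump function, with the usual frequency borders 1/n and 1 - 1/n; the problem size, jump size and hypothetical population size below are illustrative assumptions.

    import random

    def jump(x, k):
        # Jump_k: k + |x| if |x| <= n-k or |x| = n, and n - |x| inside the fitness valley.
        ones, n = sum(x), len(x)
        if ones == n or ones <= n - k:
            return k + ones
        return n - ones

    def cga(n=30, k=3, mu=500, max_iters=200_000, seed=6):
        # Compact GA: sample two search points from the frequency vector, move every
        # frequency at which they differ by 1/mu towards the better of the two, and
        # keep all frequencies within the borders [1/n, 1 - 1/n].
        rng = random.Random(seed)
        p = [0.5] * n
        for it in range(1, max_iters + 1):
            x = [1 if rng.random() < pi else 0 for pi in p]
            y = [1 if rng.random() < pi else 0 for pi in p]
            if jump(y, k) > jump(x, k):
                x, y = y, x                    # x is now the better sample
            if sum(x) == n:
                return it                      # optimum sampled
            for i in range(n):
                if x[i] != y[i]:
                    p[i] += 1.0 / mu if x[i] == 1 else -1.0 / mu
                    p[i] = min(max(p[i], 1.0 / n), 1.0 - 1.0 / n)
        return None

    print("iterations until the optimum of Jump was sampled:", cga())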

We also provide a simple general method based on parallel runs that, under mild conditions, (i) overcomes the need to specify a suitable population size, but gives a performance close to the one stemming from the best-possible population size, and (ii) transforms EDAs with high-probability performance guarantees into EDAs with similar bounds on the expected runtime.
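
The parallel-run idea can be pictured with a generic doubling scheme such as the following sketch, which is only an illustrative rendering of such an approach and not necessarily the scheme proposed in the paper. The factory make_run(mu) is a hypothetical interface: it is assumed to return a run object whose step() method performs one iteration and reports whether the optimum has been found (for instance, a thin wrapper around the cGA sketch above).

    def parallel_runs(make_run, max_rounds=30):
        # Hypothetical doubling scheme: in round r a new run with hypothetical
        # population size 2**r is started, and every existing run j is advanced by
        # 2**(r - j) further iterations, so runs with smaller population sizes
        # receive exponentially more steps.  make_run(mu) is an assumed factory
        # returning an object whose step() method performs one iteration and
        # returns True once the optimum has been found.
        runs = {}
        for r in range(1, max_rounds + 1):
            runs[r] = make_run(2 ** r)
            for j, run in runs.items():
                for _ in range(2 ** (r - j)):
                    if run.step():
                        return 2 ** j          # population size of the successful run
        return None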

Runtime analysis of the univariate marginal distribution algorithm under low selective pressure and prior noise

  • Per Kristian Lehre
  • Phan Trung Hai Nguyen

We perform a rigorous runtime analysis for the Univariate Marginal Distribution Algorithm on the LeadingOnes function, a well-known benchmark function in the theory community of evolutionary computation with a high correlation between decision variables. For a problem instance of size n, the currently best known upper bound on the expected runtime is O(nλ log λ + n^2) (Dang and Lehre, GECCO 2015), while a lower bound necessary to understand how the algorithm copes with variable dependencies is still missing. Motivated by this, we show that the algorithm requires a runtime of e^{Ω(µ)} with high probability and in expectation if the selective pressure is low; otherwise, we obtain a lower bound of [MATH HERE] on the expected runtime. Furthermore, we consider for the first time the algorithm on the function under a prior noise model and obtain an O(n^2) expected runtime for the optimal parameter settings. In the end, our theoretical results are accompanied by empirical findings, not only matching the rigorous analyses but also providing new insights into the behaviour of the algorithm.
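
A minimal sketch of the UMDA on LeadingOnes with truncation selection and the standard frequency borders 1/n and 1 - 1/n; the population sizes below are illustrative assumptions, and the prior noise model studied in the paper is omitted here.

    import random

    def leading_ones(x):
        lo = 0
        for b in x:
            if b != 1:
                break
            lo += 1
        return lo

    def umda(n=40, lam=200, mu=40, max_gens=3000, seed=8):
        # UMDA: sample lam individuals from the product distribution given by the
        # frequency vector, select the mu best, and set each frequency to the
        # fraction of ones among the selected (kept within [1/n, 1 - 1/n]).
        rng = random.Random(seed)
        p = [0.5] * n
        for gen in range(1, max_gens + 1):
            pop = [[1 if rng.random() < pi else 0 for pi in p] for _ in range(lam)]
            pop.sort(key=leading_ones, reverse=True)
            if leading_ones(pop[0]) == n:
                return gen * lam               # fitness evaluations used
            selected = pop[:mu]
            for i in range(n):
                freq = sum(x[i] for x in selected) / mu
                p[i] = min(max(freq, 1.0 / n), 1.0 - 1.0 / n)
        return None

    print("evaluations until LeadingOnes is optimized:", umda())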

Improved runtime results for simple randomised search heuristics on linear functions with a uniform constraint

  • Frank Neumann
  • Mojgan Pourhassan
  • Carsten Witt

In the last decade remarkable progress has been made in the development of suitable proof techniques for analysing randomised search heuristics. The theoretical investigation of these algorithms on classes of functions is essential to the understanding of the underlying stochastic process. Linear functions have traditionally been studied in this area, resulting in tight bounds on the expected optimisation time of simple randomised search algorithms for this class of problems. Recently, the constrained version of this problem has gained attention and some theoretical results have been obtained on this class of problems as well. In this paper we study the class of linear functions under a uniform constraint and investigate the expected optimisation time of Randomised Local Search (RLS) and a simple evolutionary algorithm called the (1+1) EA. We prove a tight bound of Θ(n^2) for RLS and improve the previously best known bound for the (1+1) EA from O(n^2 log(B w_max)) to O(n^2 log B) in expectation and to O(n^2 log n) with high probability, where w_max and B are the maximum weight of the linear objective function and the bound of the uniform constraint, respectively.
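
A minimal sketch of RLS for a linear function under a uniform constraint, here taken to be the cardinality bound sum_i x_i <= B; the constraint handling (rejecting infeasible offspring) and the feasible all-zeros starting point are assumptions made for illustration.

    import random

    def rls_uniform_constraint(weights, B, iters=200_000, seed=9):
        # RLS for maximizing sum(w_i * x_i) under the uniform constraint
        # sum(x_i) <= B: flip one uniformly chosen bit (probability 1/2) or two
        # distinct bits, and reject infeasible or strictly worse offspring.
        rng = random.Random(seed)
        n = len(weights)
        x = [0] * n                           # all-zeros start: trivially feasible
        value = 0
        for _ in range(iters):
            y = list(x)
            for i in rng.sample(range(n), 1 if rng.random() < 0.5 else 2):
                y[i] ^= 1
            if sum(y) <= B:                   # uniform constraint
                new_value = sum(w for w, bit in zip(weights, y) if bit)
                if new_value >= value:
                    x, value = y, new_value
        return value

    rng = random.Random(0)
    weights = [rng.randint(1, 100) for _ in range(30)]
    print("RLS value:", rls_uniform_constraint(weights, B=10),
          "optimum:", sum(sorted(weights, reverse=True)[:10]))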

Lower bounds on the runtime of crossover-based algorithms via decoupling and family graphs

  • Andrew M. Sutton
  • Carsten Witt

The runtime analysis of evolutionary algorithms using crossover as a search operator has recently produced remarkable results indicating benefits and drawbacks of crossover and illustrating its working principles. Virtually all these results are restricted to upper bounds on the running time of the crossover-based algorithms. This work addresses this lack of lower bounds and rigorously bounds the optimization time of simple algorithms using uniform crossover on the search space {0, 1}^n from below via two novel techniques called decoupling and family graphs. First, a simple steady-state crossover-based evolutionary algorithm without selection pressure is analyzed, and it is shown that after O(µ log µ) generations, bit positions are sampled almost independently with marginal probabilities corresponding to the fraction of one-bits at the corresponding position in the initial population. Afterwards, a crossover-based algorithm using tournament selection is analyzed by a novel generalization of the family tree technique originally introduced for mutation-only EAs. Using these so-called family graphs, almost tight lower bounds on the optimization time on the OneMax benchmark function are shown.
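
A minimal sketch of a steady-state, selection-free algorithm using only uniform crossover, in the spirit of the first algorithm mentioned above: two uniformly chosen parents are recombined and the offspring replaces a uniformly chosen population member. Population size, string length and the number of steps (about µ ln µ) are illustrative assumptions.

    import math
    import random

    def crossover_only_ea(n=20, mu=50, seed=10):
        # Steady-state, selection-free algorithm with uniform crossover only: the
        # offspring of two uniformly chosen parents replaces a uniformly chosen
        # individual.  Per-position one-frequencies are preserved in expectation.
        rng = random.Random(seed)
        pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(mu)]
        initial = [sum(x[i] for x in pop) / mu for i in range(n)]
        for _ in range(int(mu * math.log(mu)) + 1):    # about mu*ln(mu) steps
            p1, p2 = rng.choice(pop), rng.choice(pop)
            child = [rng.choice((a, b)) for a, b in zip(p1, p2)]
            pop[rng.randrange(mu)] = child
        final = [sum(x[i] for x in pop) / mu for i in range(n)]
        return initial, final

    initial, final = crossover_only_ea()
    print("initial one-frequencies:", [round(f, 2) for f in initial])
    print("final   one-frequencies:", [round(f, 2) for f in final])

In individual runs the final frequencies fluctuate stochastically around the initial ones; the decoupling result concerns the joint distribution, stating that after O(µ log µ) generations the positions are sampled almost independently with marginals given by the initial one-bit fractions.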