GECCO '20: Proceedings of the 2020 Genetic and Evolutionary Computation Conference

SESSION: Evolutionary numerical optimization

Leveraging conditional linkage models in gray-box optimization with the real-valued gene-pool optimal mixing evolutionary algorithm

  • Anton Bouter
  • Stefanus C. Maree
  • Tanja Alderliesten
  • Peter A. N. Bosman

Real-world problems are often of the gray-box type. It has been shown that the Real-Valued Gene-pool Optimal Mixing Evolutionary Algorithm (RV-GOMEA) is in principle capable of exploiting such a Gray-Box Optimization (GBO) setting using linkage models that capture dependencies between problem variables, resulting in excellent performance and scalability on both benchmark and real-world problems that allow for partial evaluations. However, the linkage models proposed for RV-GOMEA so far cannot explicitly capture overlapping dependencies, and performance degrades when such dependencies exist. In this paper, we therefore introduce various ways of using conditional linkage models in RV-GOMEA. Their use is compared to that of non-conditional models, and to the state-of-the-art VkD-CMA, on various benchmark problems with overlapping dependencies. We find that RV-GOMEA with conditional linkage models achieves the best scalability on most problems, with conditional models leading to similar or better performance than non-conditional models. We conclude that the introduction of conditional linkage models to RV-GOMEA is an important contribution, as it expands the set of problems for which optimization in a GBO setting yields substantially improved performance and scalability. In future work, conditional linkage models may also prove beneficial for the optimization of real-world problems.

Distributed random walks for fitness landscape analysis

  • Ryan Dieter Lang
  • Andries Petrus Engelbrecht

Fitness landscape analysis is used to mathematically characterize optimization problems. In order to perform fitness landscape analysis on continuous-valued optimization problems, a sample of the fitness landscape needs to be taken. A common way to perform this sampling is to use random walk algorithms. This paper proposes a new random walk algorithm for continuous-valued optimization problems, called the distributed random walk algorithm. The algorithm is based on the premise that multiple short random walks of the same type will provide better coverage of the decision space and more robust fitness landscape measures than a single long random walk. The distributed random walk algorithm is simple to implement, and the computational overhead is insignificant compared to random walk algorithms in the literature. The results of the study indicate that the distributed random walk algorithm achieves both of these objectives. Furthermore, the benefits of the distributed random walk algorithm are shown to be much more significant when small step sizes are used in the random walks.
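The abstract does not specify the walk mechanics, but the premise (many short walks versus one long walk of the same total length) can be illustrated with a minimal sketch; the bounds, step size, and function names below are illustrative assumptions, not the authors' implementation:

```python
import random

def random_walk(bounds, steps, step_size, rng):
    # One bounded random walk: each step perturbs every dimension by a
    # uniform offset in [-step_size, step_size], clamped to the bounds.
    point = [rng.uniform(lo, hi) for lo, hi in bounds]
    walk = [list(point)]
    for _ in range(steps):
        for d, (lo, hi) in enumerate(bounds):
            point[d] = min(hi, max(lo, point[d] + rng.uniform(-step_size, step_size)))
        walk.append(list(point))
    return walk

def distributed_random_walk(bounds, total_steps, step_size, n_walks, seed=0):
    # Split one long walk into n_walks short walks started from
    # independent random points, pooling all visited samples.
    rng = random.Random(seed)
    per_walk = total_steps // n_walks
    samples = []
    for _ in range(n_walks):
        samples.extend(random_walk(bounds, per_walk, step_size, rng))
    return samples

bounds = [(-5.0, 5.0), (-5.0, 5.0)]
samples = distributed_random_walk(bounds, total_steps=1000, step_size=0.1, n_walks=10)
```

Because each short walk restarts from a fresh random point, the pooled sample tends to cover more of the decision space than a single 1000-step walk with the same small step size, which matches the paper's motivating premise.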

Versatile black-box optimization

  • Jialin Liu
  • Antoine Moreau
  • Mike Preuss
  • Jeremy Rapin
  • Baptiste Roziere
  • Fabien Teytaud
  • Olivier Teytaud

Automatically choosing the right algorithm from problem descriptors is a classical component of combinatorial optimization. It is also a good tool for making evolutionary algorithms fast, robust, and versatile. We present Shiwa, an algorithm that performs well on discrete and continuous, noisy and noise-free, and sequential and parallel black-box optimization problems. Our algorithm is experimentally compared to competitors on YABBOB, a BBOB-comparable testbed, and on some of its variants, and then validated on several real-world testbeds.

A surrogate-assisted metaheuristic for bilevel optimization

  • Jesús-Adolfo Mejía-de-Dios
  • Efrén Mezura-Montes

A Bilevel Optimization Problem (BOP) consists of two optimization problems arranged in a hierarchical structure. A BOP is solved when an optimum of the upper-level problem is found, subject to the optimal response of the corresponding lower-level problem. This paper presents a metaheuristic method assisted by a kernel-interpolation numerical technique to approximate optimal solutions of a BOP. Two surrogate methods approximate the upper- and lower-level objective functions on solutions obtained by a population-based algorithm adapted to save upper-level objective function evaluations. Theoretical properties of kernel interpolation are used to study global convergence on some particular BOPs. The empirical results of this approach are analyzed on representative test functions for bilevel optimization, and the overall performance of the proposal is competitive.
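The abstract leaves the surrogate construction unspecified; the core idea of a kernel-interpolation surrogate (fit an interpolant through already-evaluated solutions, then query it cheaply instead of the true objective) can be sketched as follows. The Gaussian kernel and shape parameter here are illustrative assumptions, not the paper's exact choices:

```python
import math

def solve(A, b):
    # Gaussian elimination with partial pivoting on the augmented matrix.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf_surrogate(xs, ys, eps=1.0):
    # Fit a Gaussian radial-basis-function interpolant through (xs, ys)
    # by solving K w = y, where K[i][j] = exp(-eps * ||x_i - x_j||^2).
    n = len(xs)
    K = [[math.exp(-eps * sum((a - b) ** 2 for a, b in zip(xs[i], xs[j])))
          for j in range(n)] for i in range(n)]
    w = solve(K, list(ys))

    def predict(x):
        # Cheap surrogate evaluation: weighted sum of kernels.
        return sum(w[j] * math.exp(-eps * sum((a - b) ** 2 for a, b in zip(x, xs[j])))
                   for j in range(n))
    return predict
```

By construction the interpolant reproduces the training evaluations exactly, which is the property that makes such surrogates useful for saving expensive upper-level evaluations.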

Deep generative model for non-convex constraint handling

  • Naoki Sakamoto
  • Eiji Semmatsu
  • Kazuto Fukuchi
  • Jun Sakuma
  • Youhei Akimoto

In this study, we consider black-box minimization problems with non-convex constraints, where the constraints are significantly cheaper to evaluate than the objective. Non-convex constraints generally make it difficult to solve problems using evolutionary approaches. In this paper, we revisit a conventional technique called decoder constraint handling, which transforms a feasible non-convex domain into an easy-to-control convex set. This approach is promising because it transforms a constrained problem into an almost unconstrained one. However, its application has been considerably limited, because designing or training such a nonlinear decoder requires domain knowledge or manually prepared training data. To fully automate the decoder design, we use deep generative models. We propose a novel scheme to train a deep generative model without using manually prepared training data. For this purpose, we first train feasible solution samplers, which are deep neural networks, using the constraint functions. Subsequently, we train another deep generative model using the data generated from the trained samplers as the training data. The proposed framework is applied to tasks inspired by topology optimization problems. The empirical study demonstrates that the proposed approach can locate better solutions with fewer objective function evaluations than the existing approach.
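The paper's contribution is learning the decoder automatically with deep generative models; the underlying decoder idea itself can be shown with a hand-crafted toy, where a convex latent square is mapped onto a non-convex annulus so that an otherwise unconstrained search is feasible by construction. The feasible region, objective, and search loop are all illustrative assumptions:

```python
import math
import random

def annulus_decoder(z):
    # Map latent z in the convex unit square [0, 1]^2 onto the
    # non-convex feasible annulus 1 <= ||x|| <= 2 (polar parameterization).
    angle = 2.0 * math.pi * z[0]
    radius = 1.0 + z[1]
    return (radius * math.cos(angle), radius * math.sin(angle))

def objective(x):
    # Toy objective: squared distance to the feasible point (1.5, 0).
    return (x[0] - 1.5) ** 2 + x[1] ** 2

# Unconstrained random search in the latent space; every decoded
# candidate is feasible by construction, so no constraint handling
# is needed in the search loop itself.
rng = random.Random(0)
best_z = min(([rng.random(), rng.random()] for _ in range(2000)),
             key=lambda z: objective(annulus_decoder(z)))
best_x = annulus_decoder(best_z)
```

In the paper's setting the hand-written `annulus_decoder` is replaced by a trained deep generative model, which is what removes the need for domain knowledge or manually prepared training data.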

Analyzing adaptive parameter landscapes in parameter adaptation methods for differential evolution

  • Ryoji Tanabe

Since the scale factor and the crossover rate significantly influence the performance of differential evolution (DE), parameter adaptation methods (PAMs) for these two parameters have been well studied in the DE community. Although PAMs can substantially improve the effectiveness of DE, they remain poorly understood (e.g., the working principles of PAMs). One of the difficulties in understanding PAMs comes from the lack of clarity about the parameter space spanned by the scale factor and the crossover rate. This paper addresses this issue by analyzing adaptive parameter landscapes in PAMs for DE. First, we propose the concept of an adaptive parameter landscape, which captures a moment in a parameter adaptation process; at each iteration, each individual in the population has its own adaptive parameter landscape. Second, we propose a method of analyzing adaptive parameter landscapes using a 1-step-lookahead greedy improvement metric. Third, we examine adaptive parameter landscapes in three PAMs using the proposed method. The results provide insightful information about PAMs in DE.
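As a rough reading of such a metric, one can estimate, for a single individual, how often each (scale factor, crossover rate) pair would produce an improving trial vector one step ahead. Everything below (the DE/rand/1/bin operator, grid resolution, trial count, and sphere objective) is an illustrative assumption rather than the paper's exact procedure:

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def de_trial(pop, i, F, CR, rng):
    # One DE/rand/1/bin trial vector for individual i: mutate with
    # scale factor F, then apply binomial crossover with rate CR.
    a, b, c = rng.sample([j for j in range(len(pop)) if j != i], 3)
    jrand = rng.randrange(len(pop[i]))
    return [pop[a][d] + F * (pop[b][d] - pop[c][d])
            if (rng.random() < CR or d == jrand) else pop[i][d]
            for d in range(len(pop[i]))]

def adaptive_parameter_landscape(pop, i, f, grid=11, trials=20, seed=0):
    # For one individual, estimate over an (F, CR) grid the fraction of
    # trial vectors that improve on pop[i]: a 1-step-lookahead greedy
    # improvement estimate of the landscape at this moment.
    rng = random.Random(seed)
    fi = f(pop[i])
    landscape = {}
    for gf in range(1, grid):          # F in {0.1, ..., 1.0}
        for gc in range(grid):         # CR in {0.0, ..., 1.0}
            F, CR = gf / (grid - 1), gc / (grid - 1)
            wins = sum(f(de_trial(pop, i, F, CR, rng)) < fi
                       for _ in range(trials))
            landscape[(F, CR)] = wins / trials
    return landscape
```

The resulting grid is one individual's snapshot at one iteration, mirroring the abstract's point that every individual has its own adaptive parameter landscape at each moment of the adaptation process.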

Towards dynamic algorithm selection for numerical black-box optimization: investigating BBOB as a use case

  • Diederick Vermetten
  • Hao Wang
  • Thomas Bäck
  • Carola Doerr

One of the most challenging problems in evolutionary computation is to select, from its family of diverse solvers, one that performs well on a given problem. This algorithm selection problem is complicated by the fact that different phases of the optimization process require different search behavior. While this can partly be controlled by the algorithm itself, there are large differences in performance between algorithms. It can therefore be beneficial to swap the configuration or even the entire algorithm during the run. Although long deemed impractical, recent advances in machine learning and in exploratory landscape analysis give hope that this dynamic algorithm configuration (dynAC) can eventually be realized by automatically trained configuration schedules. With this work we aim to promote research on dynAC by introducing a simpler variant that focuses only on switching between different algorithms, not configurations. Using the rich data from the Black Box Optimization Benchmark (BBOB) platform, we show that even single-switch dynamic algorithm selection (dynAS) can potentially result in significant performance gains. We also discuss key challenges in dynAS and argue that the BBOB framework can become a useful tool in overcoming these.
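A single-switch policy of this kind can be sketched with two toy solvers: spend an initial budget on global random search, then hand the incumbent to a local (1+1) hill climber. The solvers, switch point, and simplified budget accounting below are illustrations, not the paper's experimental setup:

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def random_search_step(best, f, rng, bounds):
    # Global phase: sample a fresh uniform point, keep the better one.
    cand = [rng.uniform(lo, hi) for lo, hi in bounds]
    return cand if f(cand) < f(best) else best

def local_step(best, f, rng, sigma=0.1):
    # Local phase: (1+1) hill climber with Gaussian perturbations.
    cand = [v + rng.gauss(0.0, sigma) for v in best]
    return cand if f(cand) < f(best) else best

def single_switch_run(f, bounds, budget, switch_point, seed=0):
    # Spend the first switch_point iterations on global random search,
    # then switch once and hand the incumbent to the local hill climber.
    rng = random.Random(seed)
    best = [rng.uniform(lo, hi) for lo, hi in bounds]
    for t in range(budget):
        if t < switch_point:
            best = random_search_step(best, f, rng, bounds)
        else:
            best = local_step(best, f, rng)
    return best

best = single_switch_run(sphere, [(-5.0, 5.0)] * 3, budget=3000, switch_point=500)
```

The open question dynAS studies is when to switch and to which algorithm; in this sketch both are fixed by hand, whereas the paper investigates choosing them from benchmark data.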