GECCO '19: Proceedings of the Genetic and Evolutionary Computation Conference

SESSION: Evolutionary numerical optimization

A global surrogate assisted CMA-ES

  • Nikolaus Hansen

We explore the arguably simplest way to build an effective surrogate fitness model in continuous search spaces. The model complexity is linear, diagonal-quadratic, or full quadratic, depending on the number of available data points. The model parameters are computed using the Moore-Penrose pseudoinverse. The model is used as a surrogate fitness for CMA-ES if the rank correlation between true fitness and surrogate value of recently sampled data points is high. Otherwise, further samples from the current population are successively added as data to the model. We empirically compare the IPOP scheme of the new model-assisted lq-CMA-ES with a variety of previously proposed methods and with a simple portfolio algorithm using SLSQP and CMA-ES. We conclude that a global quadratic model and a simple portfolio algorithm are viable options to enhance CMA-ES. The model-building code is available as part of the pycma Python module on GitHub and PyPI.
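
The gist of the model-building step can be sketched as follows, assuming a full-quadratic basis, least-squares fitting via the pseudoinverse, and Kendall's tau as the rank correlation; the basis selection by data volume and the exact acceptance threshold are simplified here:

    import numpy as np
    from scipy.stats import kendalltau

    def quadratic_features(X):
        # Basis [1, x, x_i * x_j (i <= j)] for a full quadratic model.
        n, d = X.shape
        cross = [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
        return np.column_stack([np.ones(n), X] + cross)

    def fit_surrogate(X, y):
        # Least-squares fit of the model parameters via the
        # Moore-Penrose pseudoinverse.
        return np.linalg.pinv(quadratic_features(X)) @ y

    def surrogate_is_trusted(coeffs, X_recent, y_recent, tau_min=0.85):
        # Trust the model only if surrogate and true fitness rank the
        # recently sampled points consistently (threshold illustrative).
        y_hat = quadratic_features(X_recent) @ coeffs
        tau, _ = kendalltau(y_hat, y_recent)
        return bool(tau >= tau_min)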

Analysis of a meta-ES on a conically constrained problem

  • Michael Hellwig
  • Hans-Georg Beyer

The paper presents a theoretical performance analysis of a hierarchical Evolution Strategy (meta-ES) variant for mutation strength control on a conically constrained problem. Infeasible offspring are repaired by projection onto the boundary of the feasibility region. Closed-form approximations are used for the one-generation progress of the lower-level evolution strategy. An interval that brackets the expected progress over a single isolation period of the meta-ES is derived. Approximate deterministic evolution equations are obtained that characterize the upper-level strategy dynamics. It is shown that the dynamical behavior of the meta-ES is determined by the choice of the mutation strength control parameter. The obtained theoretical results are compared to experiments to assess the approximation quality.
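
The repair step can be illustrated with the closed-form projection onto a second-order cone; a sketch assuming the feasible region is parameterized as {(t, u) : ||u|| <= a*t} with t = x[0], which may differ from the paper's exact cone:

    import numpy as np

    def project_onto_cone(x, a):
        # Repair an infeasible point by projecting it onto the cone
        # {(t, u) : ||u|| <= a*t}; infeasible points land on the boundary.
        t, u = x[0], x[1:]
        norm_u = np.linalg.norm(u)
        if norm_u <= a * t:
            return x  # already feasible, no repair needed
        if t + a * norm_u <= 0:
            return np.zeros_like(x)  # apex of the cone is closest
        s = (t + a * norm_u) / (1.0 + a**2)  # closed-form projection
        return np.concatenate(([s], (a * s / norm_u) * u))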

Large-scale noise-resilient evolution-strategies

  • Oswin Krause

Ranking-based Evolution Strategies (ES) are efficient algorithms for problems where gradient information is not available or not informative. This makes ES interesting for Reinforcement Learning (RL). However, in RL the high dimensionality of the search space, as well as the noise of the simulations, makes direct application of ES challenging. Noise makes ranking points difficult, and a large budget of re-evaluations is needed to maintain a bounded error rate. In this work, the ranked weighting is replaced by a linear weighting function, which results in nearly unbiased stochastic gradient descent (SGD) on the manifold of probability distributions. The approach is analyzed theoretically and the algorithm is adapted based on the results of the analysis. It is shown that in the limit of infinite dimensions, the algorithm becomes invariant to smooth monotone transformations of the objective function. Further, drawing on the theory of SGD, an adaptation of the learning rates based on the noise level is proposed, at the cost of a second evaluation for every sampled point. It is shown empirically that the proposed method improves on a simple ES using cumulative step-size adaptation and ranking, and that it is more noise-resilient than a ranking-based approach.
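
A minimal sketch of the central change, replacing rank-based weights with a (centered) linear weighting of the raw fitness values; the step size and normalization here are illustrative, not the paper's:

    import numpy as np

    def es_gradient_step(f, mean, sigma, lam, lr, rng):
        # One ES step with linear weighting: each sample is weighted by
        # its centered fitness value instead of its rank, giving a nearly
        # unbiased stochastic gradient on the search distribution.
        Z = rng.standard_normal((lam, mean.size))
        fitness = np.array([f(mean + sigma * z) for z in Z])
        weights = fitness - fitness.mean()  # linear, not rank-based
        grad = weights @ Z / (lam * sigma)  # search-gradient estimate
        return mean - lr * grad             # descent step (minimization)

    rng = np.random.default_rng(0)
    mean = es_gradient_step(lambda x: x @ x, np.ones(10), 0.1, 40, 0.01, rng)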

Landscape analysis of Gaussian process surrogates for the covariance matrix adaptation evolution strategy

  • Zbyněk Pitra
  • Jakub Repický
  • Martin Holeňa

Gaussian process modeling has been shown to provide valuable surrogate models for the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) in continuous single-objective black-box optimization tasks where the optimized function is expensive. In this paper, we investigate how different Gaussian process settings influence the error between the predicted and genuine population ordering, in connection with features representing the fitness landscape. Apart from using landscape-analysis features known from the literature, we propose a new set of features based on CMA-ES state variables. We perform the landscape analysis on a large set of data generated by runs of a surrogate-assisted version of the CMA-ES on the noiseless part of the Comparing Continuous Optimisers (COCO) benchmark function testbed.
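
A sketch of measuring the population-ordering error of a GP surrogate, assuming scikit-learn's GaussianProcessRegressor and Kendall's tau as the ordering metric; the paper's GP settings and exact error measure may differ:

    import numpy as np
    from scipy.stats import kendalltau
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def population_ordering_agreement(X_train, y_train, X_pop, y_pop, nu=2.5):
        # Fit a GP surrogate on archived evaluations and measure how well
        # it reproduces the true ordering of the current population
        # (1 = perfect agreement, 0.5 = random, 0 = fully reversed).
        gp = GaussianProcessRegressor(kernel=Matern(nu=nu), normalize_y=True)
        gp.fit(X_train, y_train)
        tau, _ = kendalltau(gp.predict(X_pop), y_pop)
        return (1.0 + tau) / 2.0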

Adaptive ranking based constraint handling for explicitly constrained black-box optimization

  • Naoki Sakamoto
  • Youhei Akimoto

A novel explicit constraint handling technique for the covariance matrix adaptation evolution strategy (CMA-ES) is proposed. The proposed constraint handling exhibits two invariance properties. One is invariance to arbitrary element-wise increasing transformations of the objective and constraint functions. The other is invariance to arbitrary affine transformations of the search space. The proposed technique virtually transforms a constrained optimization problem into an unconstrained one by considering an adaptively weighted sum of the ranking of the objective function values and the ranking of the constraint violations, where a violation is measured by the Mahalanobis distance between a candidate solution and its projection onto the boundary of the constraints. Simulation results show that the CMA-ES with the proposed constraint handling exhibits the affine invariance and performs on constrained problems similarly to the CMA-ES on their unconstrained counterparts.
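
The ranking combination can be sketched as below; project_to_feasible is a hypothetical helper standing in for the projection onto the constraint boundary, and the adaptation rule for the weight alpha is omitted:

    import numpy as np
    from scipy.stats import rankdata

    def mahalanobis(x, y, C_inv):
        # Distance in the metric induced by the inverse covariance C_inv.
        d = x - y
        return np.sqrt(d @ C_inv @ d)

    def combined_ranking(fvals, X, project_to_feasible, C_inv, alpha):
        # Rank candidates by an adaptively weighted sum of the objective
        # ranking and the ranking of Mahalanobis distances from each
        # candidate to its projection onto the constraint boundary.
        violations = np.array(
            [mahalanobis(x, project_to_feasible(x), C_inv) for x in X]
        )
        total = rankdata(fvals) + alpha * rankdata(violations)
        return np.argsort(total)  # indices of candidates, best first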

Deep reinforcement learning based parameter control in differential evolution

  • Mudita Sharma
  • Alexandros Komninos
  • Manuel López-Ibáñez
  • Dimitar Kazakov

Adaptive Operator Selection (AOS) is an approach that controls discrete parameters of an Evolutionary Algorithm (EA) during the run. In this paper, we propose an AOS method based on Double Deep Q-Learning (DDQN), a Deep Reinforcement Learning method, to control the mutation strategies of Differential Evolution (DE). The application of DDQN to DE requires two phases. First, a neural network is trained offline by collecting data about the DE state and the benefit (reward) of applying each mutation strategy during multiple runs of DE tackling benchmark functions. We define the DE state as the combination of 99 different features and we analyze three alternative reward functions. Second, when DDQN is applied as a parameter controller within DE to a different test set of benchmark functions, DDQN uses the trained neural network to predict which mutation strategy should be applied to each parent at each generation according to the DE state. Benchmark functions for training and testing are taken from the CEC2005 benchmark with dimensions 10 and 30. We compare the results of the proposed DE-DDQN algorithm to several baseline DE algorithms using no online selection, random selection, or other AOS methods, and also to the two winners of the CEC2005 competition. The results show that DE-DDQN outperforms the non-adaptive methods for all functions in the test set, while its results are comparable with those of the two CEC2005 winners.
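
A minimal sketch of the online phase, where a trained Q-function (here the stub q_values, standing in for the trained network) picks one of four classic DE mutation strategies per parent; the 99-feature state and the DDQN training loop are omitted:

    import numpy as np

    STRATEGIES = ["rand/1", "best/1", "rand/2", "current-to-best/1"]

    def mutate(pop, i, best, F, strategy, rng):
        # Classic DE mutation operators; donor indices exclude the parent.
        idx = rng.choice([j for j in range(len(pop)) if j != i], 5,
                         replace=False)
        a, b, c, d, e = pop[idx]
        if strategy == "rand/1":
            return a + F * (b - c)
        if strategy == "best/1":
            return best + F * (a - b)
        if strategy == "rand/2":
            return a + F * (b - c) + F * (d - e)
        return pop[i] + F * (best - pop[i]) + F * (a - b)

    def select_strategy(state, q_values):
        # Greedy action selection with the trained network: the DE state
        # features go in, one Q-value per mutation strategy comes out.
        return STRATEGIES[int(np.argmax(q_values(state)))]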

Mixed-integer benchmark problems for single- and bi-objective optimization

  • Tea Tušar
  • Dimo Brockhoff
  • Nikolaus Hansen

We introduce two suites of mixed-integer benchmark problems to be used for analyzing and comparing black-box optimization algorithms. They contain problems of diverse difficulties that are scalable in the number of decision variables. The bbob-mixint suite is designed by partially discretizing the established BBOB (Black-Box Optimization Benchmarking) problems. The bi-objective problems from the bbob-biobj-mixint suite are, on the other hand, constructed by using the bbob-mixint functions as their separate objectives. We explain the rationale behind our design decisions and show how to use the suites within the COCO (Comparing Continuous Optimizers) platform. Analyzing two chosen functions in more detail, we also provide some unexpected findings about their properties.
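
Running an optimizer on the new suite looks roughly as follows, assuming COCO's cocoex Python module (pip install coco-experiment) and the suite name given in the paper:

    import cocoex  # COCO experimentation module

    # Iterate over the single-objective mixed-integer suite; each problem
    # is callable and scalable in the number of decision variables.
    suite = cocoex.Suite("bbob-mixint", "", "")
    for problem in suite:
        x0 = problem.initial_solution
        fval = problem(x0)  # one black-box function evaluation
        # ... plug in any mixed-integer black-box optimizer here ...
        problem.free()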

A surrogate model assisted (1+1)-ES with increased exploitation of the model

  • Jingyun Yang
  • Dirk V. Arnold

Surrogate models in black-box optimization can be exploited to different degrees. At one end of the spectrum, they can be used to provide inexpensive but inaccurate assessments of the quality of candidate solutions generated by the black-box optimization algorithm. At the other end, optimization of the surrogate model function can be used in the process of generating those candidate solutions themselves. The latter approach more fully exploits the model, but may be more susceptible to systematic model error. This paper examines the effect of the degree of exploitation of the surrogate model in the context of a simple (1 + 1)-ES. First, we analytically derive the potential gain from more fully exploiting surrogate models by using a spherically symmetric test function and a simple model for the error resulting from the use of surrogate models. We then observe the effects of increased exploitation in an evolution strategy employing Gaussian process surrogate models applied to a range of test problems. We find that the gain resulting from more fully exploiting surrogate models can be considerable.
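
The two ends of the exploitation spectrum can be sketched for a single (1+1)-ES step; model is a hypothetical fitted surrogate, and both proposal rules are illustrative:

    import numpy as np
    from scipy.optimize import minimize

    def propose_by_screening(parent, sigma, model, rng, n_trials=10):
        # Low exploitation: sample several mutations, keep the one the
        # surrogate ranks best, and evaluate only that one on the true f.
        trials = parent + sigma * rng.standard_normal((n_trials,
                                                       parent.size))
        return trials[np.argmin([model(x) for x in trials])]

    def propose_by_model_optimum(parent, sigma, model, rng):
        # High exploitation: locally optimize the surrogate itself to
        # generate the candidate, starting from a random mutation.
        x0 = parent + sigma * rng.standard_normal(parent.size)
        return minimize(model, x0, method="Nelder-Mead").x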