
GECCO '18: Proceedings of the Genetic and Evolutionary Computation Conference


SESSION: Genetic algorithms

Analysis of the use of genetic algorithms for indoor localisation via cloud point matching

  • Miguel d. Boland
  • Leandro Soares Indrusiak

A system's ability to precisely locate itself in a known physical environment is key to its capacity to interact with the environment in an intricate manner. The indoor localisation problem has been approached in a variety of ways, ranging from the identification of pre-defined features or topologies to the more general cloud-point matching.

Cloud point matching can be achieved using a variety of algorithms, each with benefits and drawbacks. Recent improvements have focused on applying genetic algorithms to the initial 'global' search for a solution, before refining that solution to a precise position with a non-genetic algorithm. This project aims to demonstrate the inefficiency of genetic algorithms on the global search problem for indoor localisation; this inefficiency is thought to be caused by the solution space's low dimensionality, the topology of the solution landscape and the inefficacy of crossover operators in the algorithm. Based on our assumptions about map topologies, we conclude that significant redundancies can be found in some purely genetic heuristics, and we suggest further development of landscape analysis to allow the use of algorithms appropriate to the scenario's complexity.
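
To make the setting concrete, the sketch below shows a toy GA performing the kind of 'global' pose search discussed above: it evolves a 2D pose (x, y, θ) that aligns an observed point cloud to a reference map, scoring candidates by nearest-neighbour distance. This is an illustrative reconstruction, not the authors' implementation; all function names, ranges and operator settings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def transform(points, pose):
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return points @ R.T + np.array([x, y])

def inverse_transform(points, pose):
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return (points - np.array([x, y])) @ R

def fitness(pose, observed, reference):
    # Negative mean nearest-neighbour distance (higher is better).
    moved = transform(observed, pose)
    d = np.linalg.norm(moved[:, None, :] - reference[None, :, :], axis=2)
    return -d.min(axis=1).mean()

def ga_pose_search(observed, reference, pop_size=30, generations=60):
    lo = np.array([-5.0, -5.0, -np.pi])
    hi = np.array([5.0, 5.0, np.pi])
    pop = rng.uniform(lo, hi, size=(pop_size, 3))
    for _ in range(generations):
        scores = np.array([fitness(p, observed, reference) for p in pop])
        elite = pop[np.argsort(scores)[::-1][: pop_size // 2]]
        # Uniform crossover between random elite parents plus Gaussian mutation.
        parents = elite[rng.integers(len(elite), size=(pop_size, 2))]
        mask = rng.random((pop_size, 3)) < 0.5
        children = np.where(mask, parents[:, 0], parents[:, 1])
        children = children + rng.normal(0, 0.1, size=children.shape)
        pop = np.clip(children, lo, hi)
    scores = np.array([fitness(p, observed, reference) for p in pop])
    return pop[scores.argmax()]

# Toy usage: an observed scan generated from a known pose of the reference map.
reference = rng.uniform(-2.0, 2.0, size=(60, 2))
true_pose = np.array([1.0, -0.5, 0.3])
observed = inverse_transform(reference, true_pose)
print(ga_pose_search(observed, reference))  # prints the best pose found
```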

Fast algorithm for fair comparison of genetic algorithms

  • Chia-Sheng Chen
  • Hung-Wei Hsu
  • Tian-Li Yu

Since numerous genetic algorithms (GAs) are developed every year, GA researchers need a fast algorithm to fairly compare their performance. In this paper, we formalized the performance metric and presented three algorithms to find the right population size for performance comparison in terms of the Number of Fitness Evaluations (NFE). Instead of finding the population size n_mNFE that produces the minimum NFE (mNFE), we took the methodology of finding the size n* at which the GA converges to an arbitrary notion of success with a desired probability p*. Of the three algorithms, the first, the commonly used bisection method, was shown to be biased and to lack generality. The second is an unbiased modification of the first, at the cost of more function evaluations. The third, called Greedy Approach Regarding Locality (GARL), is our recommendation; it empirically outperforms the second by an exponential factor. We also analyzed the time complexity of the second and third algorithms, providing an upper bound for the average case. This work can be viewed as a general efficiency-comparison framework applicable to almost all GAs except parameterless schemes.
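
For reference, the bisection method used as the baseline here is commonly implemented roughly as sketched below: double the population size until runs succeed with the desired probability, then bisect between the last failing and the first succeeding size. The helper names (`run_ga`, `reliable`) and all constants are assumptions, not the paper's code.

```python
# `run_ga(n)` is a hypothetical callable returning True if one run with
# population size n reaches the chosen success criterion.
def reliable(run_ga, n, runs=10, p_star=0.9):
    successes = sum(run_ga(n) for _ in range(runs))
    return successes / runs >= p_star

def bisection_population_size(run_ga, n0=4, tol=0.1):
    lo, hi = n0, n0
    while not reliable(run_ga, hi):     # doubling phase
        lo, hi = hi, hi * 2
    while (hi - lo) / hi > tol:         # bisection phase
        mid = (lo + hi) // 2
        if reliable(run_ga, mid):
            hi = mid
        else:
            lo = mid
    return hi
```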

Tunneling between plateaus: improving on a state-of-the-art MAXSAT solver using partition crossover

  • Wenxiang Chen
  • Darrell Whitley
  • Renato Tinós
  • Francisco Chicano

There are two important challenges for local search algorithms applied to Maximum Satisfiability (MAXSAT): 1) local search spends a great deal of time blindly exploring plateaus in the search space, and 2) local search is less effective on application instances. The second problem may be related to local search's inability to exploit problem structure. We propose a genetic recombination operator to address both issues. On problems with well-defined local optima, partition crossover is able to "tunnel" between local optima to discover new local optima in O(n) time. The PXSAT algorithm combines partition crossover and local search to produce a new way to escape plateaus. Partition crossover locally decomposes the evaluation function for a given instance into independent components, and is guaranteed to find the best solution among an exponential number of candidate solutions in O(n) time. Empirical results on an extensive set of application instances show that the proposed framework substantially improves two of the best local search solvers, AdaptG2WSAT and Sparrow, on many application instances. PXSAT combined with AdaptG2WSAT is also able to outperform CCLS, winner of several recent MAXSAT competitions.
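
The following is a minimal sketch of how partition crossover can be realised for MAXSAT, under assumed conventions (clauses as lists of signed 1-based literals, parents as 0/1 lists): variables on which the parents differ are grouped into connected components of the clause-interaction graph, and each component is copied from whichever parent satisfies more of the clauses touching it. It illustrates the idea only and is not the PXSAT implementation.

```python
from collections import defaultdict

def satisfied(clause, assign):
    # A clause (list of signed 1-based literals) holds if any literal is true.
    return any((assign[abs(l) - 1] == 1) == (l > 0) for l in clause)

def partition_crossover(p1, p2, clauses):
    diff = {i for i in range(len(p1)) if p1[i] != p2[i]}
    # Union-find over differing variables that co-occur in some clause.
    parent = {v: v for v in diff}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    clause_diff_vars = [{abs(l) - 1 for l in c} & diff for c in clauses]
    for vs in clause_diff_vars:
        vs = sorted(vs)
        for a, b in zip(vs, vs[1:]):
            parent[find(a)] = find(b)
    # Clauses touched by each connected component of differing variables.
    comp_clauses = defaultdict(list)
    for ci, vs in enumerate(clause_diff_vars):
        for root in {find(v) for v in vs}:
            comp_clauses[root].append(ci)
    child = list(p1)
    for root, cids in comp_clauses.items():
        members = [v for v in diff if find(v) == root]
        def score(source):
            trial = list(p1)
            for v in members:
                trial[v] = source[v]
            return sum(satisfied(clauses[c], trial) for c in cids)
        # Take this component from whichever parent satisfies more touched clauses.
        if score(p2) > score(p1):
            for v in members:
                child[v] = p2[v]
    return child

# Toy usage on a 4-variable formula.
clauses = [[1, 2], [-2, 3], [3, 4], [-1, -4]]
print(partition_crossover([1, 0, 0, 1], [0, 1, 1, 0], clauses))
```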

Runtime analysis of probabilistic crowding and restricted tournament selection for bimodal optimisation

  • Edgar Covantes Osuna
  • Dirk Sudholt

Many real optimisation problems lead to multimodal domains and so require the identification of multiple optima. Niching methods have been developed to maintain population diversity, to investigate many peaks in parallel and to reduce the effect of genetic drift. Using rigorous runtime analysis, we analyse for the first time two well-known niching methods: probabilistic crowding and restricted tournament selection (RTS). We incorporate both methods into a (μ+1) EA on the bimodal function TwoMax, where the goal is to find two optima at opposite ends of the search space. In probabilistic crowding, the offspring compete with their parents and the survivor is chosen proportionally to its fitness. On TwoMax, probabilistic crowding fails to find any reasonable solution quality even in exponential time. In RTS the offspring compete against the closest individual amongst w (window size) individuals. We prove that RTS fails if w is too small, leading to exponential times with high probability. However, if w is chosen large enough, it finds both optima for TwoMax in time O(n log n) with high probability. Our theoretical results are accompanied by experimental studies that match the theory and also shed light on parameters not covered by the theoretical results.
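
The two replacement rules analysed here are simple enough to state in a few lines; the sketch below gives illustrative (not the authors') implementations of a probabilistic-crowding step and an RTS step on TwoMax, where TwoMax(x) = max(|x|_1, n - |x|_1). The mutation operator and all parameters are assumptions.

```python
import random

def twomax(x):
    ones = sum(x)
    return max(ones, len(x) - ones)

def mutate(x):
    n = len(x)
    return [b ^ (random.random() < 1.0 / n) for b in x]  # standard bit mutation

def probabilistic_crowding_step(pop):
    i = random.randrange(len(pop))
    child = mutate(pop[i])
    fp, fc = twomax(pop[i]), twomax(child)
    # Offspring replaces its parent with probability proportional to fitness.
    if random.random() < fc / (fp + fc):
        pop[i] = child

def rts_step(pop, w):
    parent = random.choice(pop)
    child = mutate(parent)
    window = random.sample(range(len(pop)), w)
    # Compete against the closest individual (Hamming distance) in the window.
    closest = min(window, key=lambda j: sum(a != b for a, b in zip(pop[j], child)))
    if twomax(child) >= twomax(pop[closest]):
        pop[closest] = child
```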

Shaper-GA: automatic shape generation for modular house design

  • Bruno Taborda
  • Ana de Almeida
  • Filipe Santos
  • Sara Eloy
  • Krystian Kwiecinski

This work presents a Genetic Algorithm (GA) approach, Shaper-GA, to produce automatic designs for modular houses. A set of architectural design rules defining a language of design is incorporated into the GA fitness function. When genetic drift or premature local convergence appears to be occurring, the method switches to an adaptive mutation rate to overcome fitness stagnation. The GA tool efficiently produces several layout solutions obeying the design rules and further placement constraints. Such a tool can be integrated into an appropriate user interface, allowing future house owners to customize their own house, or construction companies to meet clients' requirements while complying with a language of design.
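
A stagnation-triggered adaptive mutation rate of the kind mentioned above could look like the following sketch, where the rate is boosted after `patience` generations without improvement and reset once progress resumes. The class name and all constants are hypothetical, not taken from Shaper-GA.

```python
class AdaptiveMutation:
    def __init__(self, base_rate=0.01, boosted_rate=0.1, patience=20):
        self.base_rate = base_rate
        self.boosted_rate = boosted_rate
        self.patience = patience
        self.best = float("-inf")
        self.stagnant = 0

    def rate(self, current_best):
        # Track the best fitness seen so far and count stagnant generations.
        if current_best > self.best:
            self.best = current_best
            self.stagnant = 0
        else:
            self.stagnant += 1
        return self.boosted_rate if self.stagnant >= self.patience else self.base_rate
```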

Simple on-the-fly parameter selection mechanisms for two classical discrete black-box optimization benchmark problems

  • Carola Doerr
  • Markus Wagner

Despite significant empirical and theoretical evidence that non-static parameter choices can be strongly beneficial in evolutionary computation, the question of how best to adjust parameter values plays only a marginal role in contemporary research on discrete black-box optimization. This has led to the unsatisfactory situation in which feedback-free parameter selection rules, such as the cooling schedule of Simulated Annealing, are predominant in state-of-the-art heuristics, while, at the same time, we understand very well that such time-dependent selection rules cannot perform as well as adjustment rules that take the evolution of the optimization process into account. A number of adaptive and self-adaptive parameter control strategies have been proposed in the literature, but they have not (yet) made their way to a broader public. A key obstacle seems to lie in their rather complex update rules.

The purpose of our work is to demonstrate that high-performing online parameter selection rules do not have to be very complicated. More precisely, we experiment with a multiplicative, comparison-based update rule to adjust the mutation rate of a (1+1) Evolutionary Algorithm. We show that this simple self-adjusting rule outperforms the best static unary unbiased black-box algorithm on LeadingOnes, achieving an almost optimal speedup of about 18%. We also observe promising performance on the OneMax benchmark problem.
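
One plausible instantiation of such a multiplicative, comparison-based rule (not necessarily the authors' exact update) is sketched below for a (1+1) EA on LeadingOnes: the mutation rate is multiplied by a factor after an improving step and slightly decreased otherwise. The factors, caps and seed are assumptions.

```python
import random

def leading_ones(x):
    count = 0
    for bit in x:
        if bit == 0:
            break
        count += 1
    return count

def self_adjusting_one_plus_one_ea(n, factor=2.0, seed=None):
    random.seed(seed)
    x = [random.randint(0, 1) for _ in range(n)]
    rate = 1.0 / n
    evaluations = 0
    while leading_ones(x) < n:
        y = [b ^ (random.random() < rate) for b in x]
        evaluations += 1
        if leading_ones(y) > leading_ones(x):
            rate = min(0.5, rate * factor)                   # success: raise the rate
        else:
            rate = max(1.0 / n, rate * factor ** -0.25)      # failure: lower it slightly
        if leading_ones(y) >= leading_ones(x):               # elitist acceptance
            x = y
    return evaluations

print(self_adjusting_one_plus_one_ea(50, seed=1))
```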

Towards a theory-guided benchmarking suite for discrete black-box optimization heuristics: profiling (1 + λ) EA variants on OneMax and LeadingOnes

  • Carola Doerr
  • Furong Ye
  • Sander van Rijn
  • Hao Wang
  • Thomas Bäck

Theoretical and empirical research on evolutionary computation methods complement each other by providing two fundamentally different approaches towards a better understanding of black-box optimization heuristics. In discrete optimization, both streams developed rather independently of each other, but we observe today an increasing interest in reconciling these two sub-branches. In continuous optimization, the COCO (Comparing Continuous Optimisers) benchmarking suite has established itself as an important platform that theoreticians and practitioners use to exchange research ideas and questions. No widely accepted equivalent exists in the research domain of discrete black-box optimization.

Marking an important step towards filling this gap, we adjust the COCO software to pseudo-Boolean optimization problems, and obtain from this a benchmarking environment that allows a fine-grained empirical analysis of discrete black-box heuristics. In this documentation we demonstrate how this test bed can be used to profile the performance of evolutionary algorithms. More concretely, we study the optimization behavior of several (1 + λ) EA variants on the two benchmark problems OneMax and LeadingOnes. This comparison motivates a refined analysis for the optimization time of the (1 + λ) EA on LeadingOnes.
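
As a concrete reference point, a plain (1 + λ) EA on OneMax, the kind of algorithm/problem pair profiled with this benchmarking environment, can be sketched as follows; the offspring size, mutation rate and seed are illustrative assumptions rather than the paper's experimental setup.

```python
import random

def one_max(x):
    return sum(x)

def one_plus_lambda_ea(n, lam=10, seed=None):
    random.seed(seed)
    x = [random.randint(0, 1) for _ in range(n)]
    evaluations = 0
    while one_max(x) < n:
        # Generate lambda offspring by standard bit mutation with rate 1/n.
        offspring = [[b ^ (random.random() < 1.0 / n) for b in x] for _ in range(lam)]
        evaluations += lam
        best = max(offspring, key=one_max)
        if one_max(best) >= one_max(x):
            x = best
    return evaluations

print(one_plus_lambda_ea(100, lam=10, seed=42))
```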

Serendipitous scaffolding to improve a genetic algorithm's speed and quality

  • Heather J. Goldsby
  • Rebecca L. Young
  • Jory Schossau
  • Hans A. Hofmann
  • Arend Hintze

A central challenge to evolutionary computation is enabling techniques to evolve increasingly complex target end products. Frequently direct approaches that reward only the target end product itself are not successful because the path between the starting conditions and the target end product traverses through a complex fitness landscape, where the directly accessible intermediary states may be require deleterious or even simply neutral mutations. As such, a host of techniques have sprung up to support evolutionary computation techniques taking these paths. One technique is scaffolding where intermediary targets are used to provide a path from the starting state to the end state. While scaffolding can be successful within well-understood domains it also poses the challenge of identifying useful intermediaries. Within this paper we first identify some shortcomings of scaffolding approaches --- namely, that poorly selected intermediaries may in fact hurt the evolutionary computation's chance of producing the desired target end product. We then describe a light-weight approach to selecting intermediate scaffolding states that improve the efficacy of the evolutionary computation.

On the runtime dynamics of the compact genetic algorithm on jump functions

  • Václav Hasenöhrl
  • Andrew M. Sutton

Jump functions were originally introduced as benchmarks on which recombinant evolutionary algorithms can provably outperform those that use mutation alone. To optimize a jump function, an algorithm must be able to execute an initial hill-climbing phase, after which a point across a large gap must be generated. Standard GAs mix mutation and crossover to achieve both behaviors. It seems likely that other techniques, such as estimation of distribution algorithms (EDAs), may exhibit such behavior, but an analysis is so far missing.

We analyze an EDA called the compact Genetic Algorithm (cGA) on jump functions with gap k. We prove that the cGA initially exhibits a strong positive drift resulting in good hill climbing behavior. Interpreting the diversity of the process as the variance of the underlying probabilistic model, we show the existence of a critical point beyond which progress slows and diversity vanishes. If k is not too large, the cGA generates with high probability an optimal solution in polynomial time before losing diversity. For k = Ω(log n), this yields a superpolynomial speedup over mutation-only approaches. We show that a small modification that creates λ > 2 offspring boosts the critical threshold and allows the cGA to solve functions with a larger gap within the same number of fitness evaluations.
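
A compact sketch of the cGA on a jump function with gap k is given below for illustration; the jump definition follows the standard one (f(x) = k + |x|_1 if |x|_1 ≤ n - k or |x|_1 = n, and n - |x|_1 otherwise), while the model update strength 1/K, the border margins and the budget are assumptions.

```python
import random

def jump(x, k):
    n, ones = len(x), sum(x)
    if ones <= n - k or ones == n:
        return k + ones
    return n - ones

def sample(p):
    return [1 if random.random() < pi else 0 for pi in p]

def cga_jump(n, k, K=None, max_evals=10**6):
    K = K if K is not None else 10 * n   # assumed update strength 1/K
    p = [0.5] * n                        # frequency vector of the model
    evals = 0
    while evals < max_evals:
        x, y = sample(p), sample(p)
        evals += 2
        if jump(x, k) == n + k or jump(y, k) == n + k:
            return evals                 # all-ones optimum sampled
        if jump(x, k) < jump(y, k):
            x, y = y, x                  # make x the winner
        for i in range(n):               # shift frequencies towards the winner
            if x[i] != y[i]:
                p[i] += (1 / K) if x[i] == 1 else (-1 / K)
                p[i] = min(1 - 1 / n, max(1 / n, p[i]))  # keep border margins
    return None

print(cga_jump(20, 2, K=200))
```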

Investigation of the exponential population scheme for genetic algorithms

  • Yuen-Jen Lin
  • Tian-Li Yu

Early GAs required many parameters to be tuned, which increases the difficulty for inexperienced practitioners. Modern GAs have most of these parameters pre-determined, and therefore recent research on parameterless schemes has focused on the population size. The techniques developed in this paper are mainly based on Harik and Lobo's work and the exponential population scheme (EPS), which doubles the population size until the solution is satisfactory. In this paper, we modify EPS based on theoretical analyses. Specifically, we propose a new termination criterion and an optimized population multiplier. The experimental results show that, compared to the Harik-Lobo scheme, our scheme reduces the number of function evaluations (NFE) by 33.4%, 19.1% and 29.6% on the parameter-less hBOA, LT-GOMEA and DSMGA-II respectively, and compared to EPS it reduces NFE by 28.5%, 4.7% and 11.0% on the same three algorithms. In addition, compared to EPS, our scheme empirically reduces the number of failures when using LT-GOMEA to solve the folded trap and MAX-SAT problems.
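
For orientation, the EPS baseline (doubling the population until the solution is satisfactory) can be wrapped around any GA roughly as sketched below; `run_ga` is a hypothetical callable returning a success flag and the NFE of one run, and the size caps are assumptions.

```python
def exponential_population_scheme(run_ga, n0=4, max_size=2**20):
    total_nfe = 0
    pop_size = n0
    while pop_size <= max_size:
        success, nfe = run_ga(pop_size)
        total_nfe += nfe
        if success:
            return pop_size, total_nfe
        pop_size *= 2            # double the population size and retry
    return None, total_nfe       # budget exhausted without success
```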

Learning an evolvable genotype-phenotype mapping

  • Matthew Andres Moreno
  • Wolfgang Banzhaf
  • Charles Ofria

We present AutoMap, a pair of methods for automatic generation of evolvable genotype-phenotype mappings. Both use an artificial neural network autoencoder trained on phenotypes harvested from fitness peaks as the basis for a genotype-phenotype mapping. In the first, the decoder segment of a bottlenecked autoencoder serves as the genotype-phenotype mapping. In the second, a denoising autoencoder serves as the genotype-phenotype mapping. Automatic generation of evolvable genotype-phenotype mappings is demonstrated on the n-legged table problem, a toy problem that defines a simple rugged fitness landscape, and the Scrabble string problem, a more complicated problem that serves as a rough model for linear genetic programming. For both problems, the automatically generated genotype-phenotype mappings are found to enhance evolvability.
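
The core idea, evolving in a low-dimensional genotype space and decoding to phenotypes before evaluation, is illustrated in the deliberately simplified sketch below; a fixed random linear map stands in for the trained decoder, the autoencoder training step is omitted, and the objective and all constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim, phenotype_dim = 4, 20
decoder_weights = rng.normal(size=(latent_dim, phenotype_dim))  # stand-in for a trained decoder

def decode(genotype):
    return np.tanh(genotype @ decoder_weights)   # genotype -> phenotype

def phenotype_fitness(phenotype):
    return -np.sum((phenotype - 0.5) ** 2)       # toy objective on phenotypes

def evolve_in_latent_space(generations=100, pop_size=20):
    pop = rng.normal(size=(pop_size, latent_dim))
    for _ in range(generations):
        scores = np.array([phenotype_fitness(decode(g)) for g in pop])
        elite = pop[np.argsort(scores)[::-1][: pop_size // 2]]
        children = elite[rng.integers(len(elite), size=pop_size)]
        pop = children + rng.normal(0, 0.1, size=children.shape)
    return max(pop, key=lambda g: phenotype_fitness(decode(g)))

print(decode(evolve_in_latent_space()))
```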

Discrepancy-based evolutionary diversity optimization

  • Aneta Neumann
  • Wanru Gao
  • Carola Doerr
  • Frank Neumann
  • Markus Wagner

Diversity plays a crucial role in evolutionary computation. While diversity has mainly been used to prevent the population of an evolutionary algorithm from premature convergence, the use of evolutionary algorithms to obtain a diverse set of solutions has gained increasing attention in recent years. Diversity optimization in terms of features of the underlying problem makes it possible to obtain a better understanding of possible solutions to the problem at hand, and can be used for algorithm selection when dealing with combinatorial optimization problems such as the Traveling Salesperson Problem.

We consider discrepancy-based diversity optimization approaches for evolving diverse sets of images, as well as instances of the Traveling Salesperson Problem on which local search is not able to find near-optimal solutions. Our experimental investigations comparing three diversity optimization approaches show that a discrepancy-based approach using a tie-breaking rule based on weighted differences to surrounding feature points provides the best results in terms of the star discrepancy measure.
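
For the diversity measure itself, the star discrepancy can be computed exactly in one dimension; the small helper below illustrates the quantity for points in [0, 1], a simplification of the multi-dimensional feature spaces considered in the paper.

```python
def star_discrepancy_1d(points):
    # Exact 1D star discrepancy of a point set in [0, 1]:
    # D* = max_i max((i+1)/n - x_(i+1), x_(i+1) - i/n) over the sorted points.
    xs = sorted(points)
    n = len(xs)
    return max(max((i + 1) / n - xs[i], xs[i] - i / n) for i in range(n))

print(star_discrepancy_1d([0.1, 0.4, 0.5, 0.9]))  # 0.25
```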

GEESE: grammatical evolution algorithm for evolution of swarm behaviors

  • Aadesh Neupane
  • Michael A. Goodrich
  • Eric G. Mercer

Animals such as bees, ants, birds, fish, and others are able to perform complex coordinated tasks like foraging, nest-selection, flocking and escaping predators efficiently without centralized control or coordination. Conventionally, mimicking these behaviors with robots requires researchers to study actual behaviors, derive mathematical models, and implement these models as algorithms. We propose a distributed algorithm, Grammatical Evolution algorithm for Evolution of Swarm bEhaviors (GEESE), which uses genetic methods to generate collective behaviors for robot swarms. GEESE uses grammatical evolution to evolve a primitive set of human-provided rules into productive individual behaviors. The GEESE algorithm is evaluated in two different ways. First, GEESE is compared to state-of-the-art genetic algorithms on the canonical Santa Fe Trail problem. Results show that GEESE outperforms the state-of-the-art by (a) providing better solution quality given sufficient population size while (b) utilizing fewer evolutionary steps. Second, GEESE outperforms both a hand-coded and a Grammatical Evolution-generated solution on a collective swarm foraging task.
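
The genotype-to-behaviour mapping that grammatical evolution relies on can be illustrated compactly: integer codons select productions from a BNF-style grammar until only terminals remain. The tiny swarm-action grammar below is invented for illustration and is not the grammar used by GEESE.

```python
GRAMMAR = {
    "<behavior>": [["<action>"], ["<action>", ";", "<behavior>"]],
    "<action>": [["move_forward"], ["turn_left"], ["pick_up"], ["drop"]],
}

def map_genotype(codons, start="<behavior>", max_wraps=2):
    symbols, out, i, wraps = [start], [], 0, 0
    while symbols and wraps <= max_wraps:
        sym = symbols.pop(0)
        if sym not in GRAMMAR:          # terminal symbol: emit it
            out.append(sym)
            continue
        if i == len(codons):            # wrap around the genotype if exhausted
            i, wraps = 0, wraps + 1
        rules = GRAMMAR[sym]
        choice = rules[codons[i] % len(rules)]  # codon picks the production
        i += 1
        symbols = list(choice) + symbols
    return " ".join(out)

print(map_genotype([3, 1, 0, 2, 0]))  # e.g. "turn_left ; pick_up"
```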

Learning Bayesian network structures with GOMEA

  • Kalia Orphanou
  • Dirk Thierens
  • Peter A. N. Bosman

Bayesian networks (BNs) are probabilistic graphical models which are widely used for knowledge representation and decision making tasks, especially in the presence of uncertainty. Finding or learning the structure of BNs from data is an NP-hard problem. Evolutionary algorithms (EAs) have been extensively used to automate the learning process. In this paper, we consider the use of the Gene-Pool Optimal Mixing Evolutionary Algorithm (GOMEA). GOMEA is a relatively new type of EA that belongs to the class of model-based EAs. The model used in GOMEA is aimed at modeling the dependency structure between problem variables, so as to improve the efficiency and effectiveness of variation. This paper shows that the excellent performance of GOMEA transfers from well-known academic benchmark problems to the specific case of learning BNs from data due to its model-building capacities and the potential to compute partial evaluations when learning BNs. On commonly-used datasets of varying size, we find that GOMEA outperforms standard algorithms such as Order-based search (OBS), as well as other EAs, such as Genetic Algorithms (GAs) and Estimation of Distribution algorithms (EDAs), even when efficient local search techniques are added.
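
The gene-pool optimal mixing step at the heart of GOMEA can be sketched generically as below: for each subset in the family of subsets (FOS), copy that subset's values from a random donor and keep the change only if fitness does not decrease. The univariate FOS and toy fitness are assumptions; the paper pairs this variation with BN-specific scoring and partial evaluations.

```python
import random

def optimal_mixing(solution, population, fos, fitness):
    best = list(solution)
    best_f = fitness(best)
    for subset in fos:
        donor = random.choice(population)
        trial = list(best)
        for i in subset:
            trial[i] = donor[i]
        if trial != best:
            f = fitness(trial)
            if f >= best_f:          # accept only non-worsening changes
                best, best_f = trial, f
    return best

# Toy usage with a univariate FOS on a OneMax-like fitness.
n = 10
population = [[random.randint(0, 1) for _ in range(n)] for _ in range(20)]
fos = [[i] for i in range(n)]
print(optimal_mixing(population[0], population, fos, fitness=sum))
```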