GECCO '21: Proceedings of the Genetic and Evolutionary Computation Conference

SESSION: Ant colony optimization and swarm intelligence

A rigorous runtime analysis of the 2-MMASib on jump functions: ant colony optimizers can cope well with local optima

  • Riade Benbaki
  • Ziyad Benomar
  • Benjamin Doerr

Ant colony optimizers have been successfully used as general-purpose optimization heuristics. Due to the complicated nature of the random processes that describe the runs of ACO algorithms, the mathematical understanding of these algorithms is much less developed than that of other nature-inspired heuristics. In this first runtime analysis of a basic ACO algorithm on a classic multimodal benchmark, we analyze the runtime of the 2-MMASib on jump functions. For moderate jump sizes k ≤ α0 ln n, α0 > 0 a constant, we prove a runtime of order [EQUATION], provided the evaporation factor ρ satisfies ρ ≤ Cn^{-1/2} ln(n)^{-1} for a sufficiently small constant C. For ρ = Θ(n^{-1/2} ln(n)^{-1}), we thus obtain a runtime of O(n ln(n)). This result shows that simple ACO algorithms can cope much better with local optima than many evolutionary algorithms, which need Ω(n^k) time.
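
For context, the jump benchmark with jump size k referred to above is, in its common formulation due to Droste, Jansen, and Wegener, defined on bit strings x ∈ {0, 1}^n as

    \[
    \mathrm{Jump}_k(x) =
    \begin{cases}
      k + \|x\|_1 & \text{if } \|x\|_1 \le n - k \text{ or } \|x\|_1 = n,\\
      n - \|x\|_1 & \text{otherwise},
    \end{cases}
    \]

where \|x\|_1 denotes the number of one-bits of x. The k fitness levels just below the optimum form a valley of low fitness, which mutation-based evolutionary algorithms typically cross only by flipping the k missing bits in a single step; this is the source of the Ω(n^k) runtime mentioned in the abstract.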

Ants can solve the parallel drone scheduling traveling salesman problem

  • Quoc Trung Dinh
  • Duc Dong Do
  • Minh Hoàng Hà

In this work, we study the parallel drone scheduling traveling salesman problem (PDSTSP), where deliveries are split between a truck and a fleet of drones. The truck performs a common delivery tour, while the drones are restricted to back-and-forth trips between customers and a depot. The objective is to minimize the completion time at which all vehicles have returned to the depot. We present a hybrid ant colony optimization (HACO) metaheuristic to solve the problem. Our algorithm builds on an idea from the literature that represents a PDSTSP solution as a permutation of all customers; a dynamic programming procedure then decomposes the customer sequence into a tour for the truck and trips for the drones. We propose a new dynamic programming procedure, combined with other problem-tailored components, to solve the problem efficiently. When tested on benchmark instances from the literature, the HACO algorithm outperforms state-of-the-art algorithms in terms of both running time and solution quality. More remarkably, we find 23 new best known solutions out of the 90 instances considered.
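
To make the decoding step concrete, the sketch below shows one way a dynamic program can split a fixed customer permutation into a truck tour and drone trips. It is a simplified illustration under assumed conditions (a single drone, drone customers served by round trips from the depot, a makespan objective, and depot index 0); the function and parameter names are hypothetical, and this is not the authors' implementation.

    from math import inf

    def split_permutation(perm, dist, drone_trip):
        """perm: customer ids in a fixed order; dist[a][b]: truck travel time;
        drone_trip[c]: round-trip time of the drone for customer c."""
        # states[last] = Pareto-optimal (truck_time, drone_time) pairs for the
        # customers processed so far, with the truck currently at `last`.
        states = {0: [(0.0, 0.0)]}
        for c in perm:
            nxt = {}
            for last, pairs in states.items():
                for truck_t, drone_t in pairs:
                    # option 1: the truck serves customer c next
                    nxt.setdefault(c, []).append((truck_t + dist[last][c], drone_t))
                    # option 2: the drone serves customer c (trips are sequential)
                    nxt.setdefault(last, []).append((truck_t, drone_t + drone_trip[c]))
            states = {last: pareto(pairs) for last, pairs in nxt.items()}
        # close the truck tour at the depot and take the best makespan
        return min(max(truck_t + dist[last][0], drone_t)
                   for last, pairs in states.items()
                   for truck_t, drone_t in pairs)

    def pareto(pairs):
        """Keep only pairs that are not dominated in both coordinates."""
        kept, best_drone = [], inf
        for truck_t, drone_t in sorted(pairs):
            if drone_t < best_drone:
                kept.append((truck_t, drone_t))
                best_drone = drone_t
        return kept

Keeping only Pareto-optimal (truck time, drone time) pairs leaves the recursion exact while pruning partial assignments that can never lead to a better makespan.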

The paradox of choice in evolving swarms: information overload leads to limited sensing

  • Calum C. Imrie
  • J. Michael Herrmann
  • Olaf Witkowski

The paradox of choice refers to the observation that numerous choices can have a detrimental effect on the quality of decision making. We study this effect in swarms in the context of a resource foraging task. We simulate the evolution of swarms gathering energy from a number of energy sources distributed in a two-dimensional environment. As the number of sources increases, the evolution of the swarm results in reduced levels of efficiency, despite the sources covering more space. Our results indicate that this effect arises because the simultaneous detection of multiple sources is incompatible with an evolutionary scheme that favours greedy energy consumption. In particular, the communication among the agents tends to reduce their efficiency by precluding the evolution of a clear preference for an increasing number of options. The overabundance of explicit information in the swarm about fitness-related options cannot be exploited by agents lacking complex planning capabilities. If the sensor ranges evolve in addition to the behaviour of the agents, a preference for reduced choice results, and the average range approaches zero as the number of sources increases. Our study thus presents a minimal model of the paradox of choice and suggests several possibilities for experimental verification.

Stasis type particle stability in a stochastic model of particle swarm optimization

  • Tomasz Kulpa
  • Krzysztof Trojanowski
  • Krzysztof Wójcik

Particle Swarm Optimization (PSO) is a heuristic optimization technique in which particles explore optima of the fitness landscape defined over the search space. It is a practical approach but vulnerable to incorrect tuning of the particle movement parameters. Therefore, stochastic models of a single particle and of the swarm are subject to analysis. In this paper, we study the stability properties of particles controlled by a universal update equation proposed by Cleghorn and Engelbrecht. We propose a new concept of particle state, the stasis state, reached when the difference between two consecutive values of the particle location variance remains small after a sufficient number of steps. We also propose a convergence time measure, the minimal number of steps the particle location variance needs to reach stasis. Finally, we visualize the convergence time characteristics obtained in simulations.
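
As a minimal illustration of the stasis idea, the following sketch simulates a single particle under the standard inertia-weight PSO update with fixed personal and global best positions (a stagnation assumption; the paper uses the more general Cleghorn-Engelbrecht update), estimates the position variance over many independent runs, and reports the first step at which two consecutive variance values differ by less than a tolerance. All parameter values and the tolerance are illustrative assumptions, not the measure defined in the paper.

    import numpy as np

    def stasis_time(w=0.7, c1=1.4, c2=1.4, p=1.0, g=0.0,
                    runs=20000, steps=500, eps=1e-3, seed=None):
        """Return (t, variance) for the first step t at which consecutive
        position-variance values differ by less than eps, or (None, var)."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(-1.0, 1.0, runs)   # initial positions, one per run
        v = np.zeros(runs)                 # initial velocities
        prev_var = np.var(x)
        for t in range(1, steps + 1):
            r1 = rng.random(runs)
            r2 = rng.random(runs)
            # standard inertia-weight PSO update with fixed attractors p and g
            v = w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)
            x = x + v
            var = np.var(x)
            if abs(var - prev_var) < eps:
                return t, var
            prev_var = var
        return None, prev_var  # no stasis detected within the horizon

    if __name__ == "__main__":
        t, var = stasis_time()
        print("stasis reached at step", t, "with position variance", var)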

Fishing for interactions: a network science approach to modeling fish school search

  • Mariana Macedo
  • Lydia Taw
  • Nishant Gurrapadi
  • Rodrigo C. Lira
  • Diego Pinheiro
  • Marcos Oliveira
  • Carmelo Bastos-Filho
  • Ronaldo Menezes

Computational swarm intelligence has been shown to solve high-dimensional optimization problems efficiently thanks to its flexibility, robustness, and low computational cost. Despite these features, swarm-based algorithms are black boxes whose dynamics may be hard to understand. In this paper, we delve into the Fish School Search (FSS) algorithm by looking at how fish interact within the fish school. We find that the network emerging from these interactions is structurally invariant across the optimization of three benchmark functions: Rastrigin, Rosenbrock, and Schwefel. At the same time, our results reveal that the level of social interaction among the fish depends on the problem. We show that the absence of highly influential fish leads to slow-paced convergence in FSS and that changes in the intensity of social interactions enable good performance on both unimodal and multimodal problems. Finally, we examine two other swarm-based algorithms, the Artificial Bee Colony (ABC) and Particle Swarm Optimization (PSO) algorithms, and find that for the same three benchmark functions the structural invariance occurs only in the FSS algorithm. We argue that FSS, ABC, and PSO have distinctive signatures of interaction structure and flow.

A bio-inspired spatial defence strategy for collective decision making in self-organized swarms

  • Judhi Prasetyo
  • Giulia De Masi
  • Raina Zakir
  • Muhanad Alkilabi
  • Elio Tuci
  • Eliseo Ferrante

In collective decision-making, individuals in a swarm reach consensus on a decision using only local interactions, without any centralized control. In the context of the best-of-n problem, characterized by n discrete alternatives, it has been shown that consensus on the best option can be reached if individuals disseminate that option for longer than the other options. Besides serving as a mechanism to modulate positive feedback, long dissemination times could also be used adversarially: adversarial swarms could infiltrate the system and propagate bad decisions using aggressive dissemination strategies. Motivated by this scenario, in this paper we propose a bio-inspired defence strategy that makes the swarm resilient against options disseminated for longer times. The strategy mainly consists in reducing the mobility of the agents associated with the option disseminated for a shorter amount of time, allowing the swarm to converge to this option. We study the effectiveness of this strategy using two classical decision mechanisms, the voter model and the majority rule, and show that in our setting the majority rule is necessary for the strategy to work. The strategy has also been validated in an experiment with real Kilobots.
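
For reference, the two decision mechanisms compared above are commonly defined as in the sketch below for a binary (best-of-2) setting. The sketch shows only the opinion-update rules; neighbourhoods, dissemination times, and the mobility-reduction defence are specific to the paper and are not modelled here.

    import random

    def voter_model(own_opinion, neighbour_opinions):
        """Adopt the opinion of one randomly chosen neighbour."""
        if not neighbour_opinions:
            return own_opinion
        return random.choice(neighbour_opinions)

    def majority_rule(own_opinion, neighbour_opinions):
        """Adopt the opinion held by the majority of the neighbourhood
        (own opinion included); ties keep the current opinion."""
        group = neighbour_opinions + [own_opinion]
        ones = sum(group)
        zeros = len(group) - ones
        if ones > zeros:
            return 1
        if zeros > ones:
            return 0
        return own_opinion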

A hybrid ant colony optimization algorithm for the knapsack problem with a single continuous variable

  • Xinhua Yang
  • Yufan Zhou
  • Ailing Shen
  • Juan Lin
  • Yiwen Zhong

The knapsack problem with a single continuous variable (KPC) is a natural extension of the standard 0-1 knapsack problem. In the KPC, the capacity of the knapsack is not fixed, which makes the problem more difficult to solve. Although several metaheuristics have been proposed for the KPC, their performance is not sufficiently stable. This paper proposes a hybrid ant colony optimization (HACO) algorithm to solve the KPC. In the HACO algorithm, a simplified ACO algorithm is designed to search for the optimal subset of items, and the mutation operator of the differential evolution algorithm is used to search for the optimal value of the continuous variable. The parameter of the mutation operator is adaptively configured via an ACO algorithm. In the simplified ACO algorithm, an independent selection strategy and a pheromone-based selection strategy are designed to speed up the algorithm and to simplify parameter tuning, respectively. The personal best solutions constructed by the ants are used to deposit pheromone. Furthermore, a value-based greedy optimization strategy is introduced to enhance the solutions. The HACO algorithm is experimentally analyzed on 40 KPC instances. The experimental results show that the HACO algorithm outperforms its competitors in terms of both the best and the mean solutions obtained.
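
To illustrate the overall structure described above, the following skeleton sketches a hybrid loop in which ants build item subsets from pheromone values, a DE-style difference mutation perturbs the single continuous variable, and each ant's personal best solution deposits pheromone. The objective evaluate(subset, s), the KPC constraint handling, the adaptive parameter control, and the greedy repair are left abstract; all names and parameter values are illustrative assumptions, not the authors' settings.

    import random

    def haco_sketch(n_items, evaluate, n_ants=20, iters=200,
                    rho=0.1, F=0.5, s_bounds=(0.0, 1.0)):
        """Return the best (subset, s) pair found and its objective value."""
        tau = [0.5] * n_items                      # pheromone per item
        ants = [{"best": None, "best_f": float("-inf"),
                 "s": random.uniform(*s_bounds)} for _ in range(n_ants)]
        global_best, global_f = None, float("-inf")

        for _ in range(iters):
            for ant in ants:
                # pheromone-based selection: include item i with probability tau[i]
                subset = [i for i in range(n_items) if random.random() < tau[i]]
                # DE-style difference mutation of the continuous variable,
                # using two other ants' values
                b, c = random.sample([a for a in ants if a is not ant], 2)
                s = ant["s"] + F * (b["s"] - c["s"])
                s = min(max(s, s_bounds[0]), s_bounds[1])
                f = evaluate(subset, s)            # problem-specific objective
                if f > ant["best_f"]:
                    ant["best"], ant["best_f"], ant["s"] = subset, f, s
                if f > global_f:
                    global_best, global_f = (subset, s), f
            # evaporation followed by personal-best pheromone deposit
            tau = [(1 - rho) * t for t in tau]
            for ant in ants:
                for i in ant["best"] or []:
                    tau[i] = min(tau[i] + rho / n_ants, 1.0)
        return global_best, global_f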