GECCO '18: Proceedings of the Genetic and Evolutionary Computation Conference
SESSION: Ant colony optimization and swarm intelligence
Information sharing and conflict resolution in distributed factored evolutionary algorithms
Competition and cooperation are powerful metaphors that have informed improvements in multi-population algorithms such as the Cooperative Coevolutionary Genetic Algorithm, Cooperative Particle Swarm Optimization, and Factored Evolutionary Algorithms (FEA). However, we suggest that a different perspective can give a finer-grained understanding of how multi-population algorithms work together to avoid problems like hitchhiking and pseudo-minima. In this paper, we apply the concepts of information sharing and conflict resolution through Pareto improvements to analyze the distributed version of FEA (DFEA). As a result, we find that the original DFEA failed to implement FEA with complete fidelity. We then revise DFEA and examine the differences between it and FEA, along with the implications for relaxing consensus in the distributed algorithm.
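To make the idea of Pareto-style conflict resolution concrete, the following is a minimal illustrative sketch, not the authors' DFEA implementation: subpopulations optimizing overlapping factors propose values for their variables, and a proposal is merged into the shared global solution only when the full solution does not get worse. The sphere objective, the factor layout, and all names are assumptions made for illustration.

```python
import random

# Illustrative sketch only (not the authors' DFEA implementation): overlapping
# subpopulations each propose values for the variables of their factor, and a
# proposal is merged into the shared global solution only if the resulting
# full solution is no worse -- a Pareto-improvement-style acceptance rule.

def sphere(x):
    """Toy objective to minimize: the sum of squares."""
    return sum(v * v for v in x)

def resolve_conflicts(global_solution, factor_bests, fitness=sphere):
    """factor_bests maps a factor (a tuple of variable indices) to the best
    partial solution found by the subpopulation optimizing that factor."""
    best = list(global_solution)
    for factor, partial in factor_bests.items():
        candidate = list(best)
        for idx, value in zip(factor, partial):
            candidate[idx] = value
        # Accept the proposal only if the full solution does not get worse.
        if fitness(candidate) <= fitness(best):
            best = candidate
    return best

if __name__ == "__main__":
    dims = 6
    solution = [random.uniform(-5, 5) for _ in range(dims)]
    factors = [(0, 1, 2), (2, 3, 4), (4, 5, 0)]        # overlapping factors
    proposals = {f: [random.uniform(-1, 1) for _ in f] for f in factors}
    merged = resolve_conflicts(solution, proposals)
    print(round(sphere(solution), 3), "->", round(sphere(merged), 3))
```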
Using ant colony optimization to optimize long short-term memory recurrent neural networks
This work examines the use of ant colony optimization (ACO) to improve long short-term memory (LSTM) recurrent neural networks (RNNs) by refining their cellular structure. The evolved networks were trained on a large database of flight data records obtained from an airline, containing flights that suffered from excessive vibration. Results were obtained using MPI (Message Passing Interface) on a high-performance computing (HPC) cluster, which evolved 1000 different LSTM cell structures using 208 cores over 5 days. The evolved LSTM cells reduced the mean prediction error by 1.37 percentage points, from 6.38% to 5.01%, when predicting excessive engine vibrations 10 seconds in the future, while at the same time dramatically reducing the number of trainable weights from 21,170 to 11,650. The ACO-optimized LSTM also performed significantly better than traditional Nonlinear Output Error (NOE), Nonlinear AutoRegression with eXogenous inputs (NARX), and Nonlinear Box-Jenkins (NBJ) models, which only reached error rates of 11.45%, 8.47% and 9.77%, respectively. The ACO algorithm employed could be utilized to optimize LSTM RNNs for any time series data prediction task.
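As a rough illustration of how ACO can search over discrete structures (a toy sketch under assumed names and scoring, not the paper's MPI-based system): ants sample which connections to include in a candidate cell graph according to pheromone levels, and pheromone is reinforced along the connections used by the best candidate. In the paper the score would come from training the resulting LSTM; a stand-in function replaces that here.

```python
import random

# Toy ACO sketch (not the paper's implementation): ants sample candidate
# cell structures as subsets of possible internal connections, guided by
# pheromone; the best structure of each iteration reinforces its edges.

N_NODES = 5                                   # hypothetical internal nodes
EDGES = [(i, j) for i in range(N_NODES) for j in range(N_NODES) if i != j]
pheromone = {e: 1.0 for e in EDGES}

def sample_structure():
    """Include each connection with probability pheromone / (1 + pheromone)."""
    return [e for e in EDGES if random.random() < pheromone[e] / (1 + pheromone[e])]

def evaluate(structure):
    """Stand-in for training an LSTM with this cell structure and measuring
    its validation error; here a toy score that prefers ~8 connections."""
    return abs(len(structure) - 8) + random.random()

def aco_search(iterations=50, ants=10, evaporation=0.1, deposit=1.0):
    best, best_score = None, float("inf")
    for _ in range(iterations):
        colony = [sample_structure() for _ in range(ants)]
        scored = sorted((evaluate(s), s) for s in colony)
        if scored[0][0] < best_score:
            best_score, best = scored[0]
        for e in pheromone:                   # evaporate all trails
            pheromone[e] *= 1 - evaporation
        for e in scored[0][1]:                # reinforce the iteration best
            pheromone[e] += deposit
    return best, best_score

if __name__ == "__main__":
    structure, score = aco_search()
    print(len(structure), "connections, score", round(score, 3))
```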
A model of artificial emotions for behavior-modulation and implicit coordination in multi-robot systems
We propose a model of artificial emotions for adaptation and implicit coordination in multi-robot systems. Artificial emotions play two roles, which resemble their function in animals and humans: modulators of individual behavior, and means of communication for social coordination. Emotions are modeled as compressed representations of the internal state and are subject to dynamics that depend on internal and external conditions. Being a compressed representation, they can be efficiently exposed to nearby robots, enabling local group-level communication. The model is instantiated for a navigation task, with the aim of showing how coordination can effectively emerge by adding artificial emotions on top of an existing navigation framework. We show the positive effects of emotion-mediated group behaviors in a few challenging scenarios that would otherwise require ad hoc strategies: preventing deadlocks in crowded conditions; enabling efficient navigation of agents with time-critical tasks; and assisting robots with faulty sensors. Two performance measures, throughput and number of collisions, are used to quantify the contribution of emotions to modulation and coordination.
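A minimal sketch of the general idea, with hypothetical emotion dimensions, update rules, and modulation policy (none taken from the paper): each robot keeps a small emotion vector that summarizes richer internal state, updates it with simple decay dynamics, broadcasts it to neighbours, and uses neighbours' broadcasts to modulate its own behaviour.

```python
# Illustrative sketch only (hypothetical emotion dimensions, dynamics, and
# modulation policy; none of these are taken from the paper): each robot keeps
# a small emotion vector, updates it from internal/external conditions,
# broadcasts it, and modulates its behaviour using neighbours' broadcasts.

class EmotionalRobot:
    def __init__(self, name):
        self.name = name
        self.frustration = 0.0        # grows while the robot is blocked
        self.urgency = 0.0            # grows with time-critical pressure

    def update_emotions(self, blocked, deadline_pressure, decay=0.9):
        # Simple dynamics: exponential decay plus a drive term.
        self.frustration = decay * self.frustration + (1.0 if blocked else 0.0)
        self.urgency = decay * self.urgency + deadline_pressure

    def broadcast(self):
        # The compressed representation exposed to nearby robots.
        return {"frustration": self.frustration, "urgency": self.urgency}

    def modulate_speed(self, neighbour_emotions, base_speed=1.0):
        # Yield to any neighbour signalling higher urgency; this is the kind
        # of implicit coordination an emotion channel can support.
        max_urgency = max((e["urgency"] for e in neighbour_emotions), default=0.0)
        return 0.5 * base_speed if max_urgency > self.urgency else base_speed

if __name__ == "__main__":
    a, b = EmotionalRobot("a"), EmotionalRobot("b")
    a.update_emotions(blocked=True, deadline_pressure=0.2)
    b.update_emotions(blocked=False, deadline_pressure=0.9)
    print(a.modulate_speed([b.broadcast()]))   # robot a slows down and yields
```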
Recurrent neural network-predictions for PSO in dynamic optimization
In order to improve particle swarm optimization (PSO) to tackle dynamic optimization problems, various strategies have been introduced, e.g., random restart, memory, and multi-swarm approaches. However, the literature lacks approaches based on prediction. In this paper we propose three different PSO variants employing a prediction approach based on recurrent neural networks to adapt the swarm behavior after a change of the objective function. We compare the variants in an experimental study to a PSO algorithm that is solely based on re-randomization. The experimental study comprises the moving peaks benchmark and dynamic extensions of the Sphere, Rastrigin, and Rosenbrock functions, showing the strengths of the prediction-based PSO variants with respect to convergence.
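One way such a prediction step can plug into PSO, shown here as an illustrative sketch rather than any of the three variants from the paper: after a change of the objective is detected, part of the swarm is re-seeded around a predicted position of the new optimum. A trained recurrent network would supply the prediction; a simple linear extrapolation of past optima stands in for it below, and all function names are assumptions.

```python
import random

# Illustrative sketch only (not the paper's algorithm): after a change of the
# objective, re-seed part of the swarm around a predicted optimum position.
# A trained recurrent network would supply `predict_next_optimum`; a linear
# extrapolation of the optimum's past positions stands in for it here.

def predict_next_optimum(history):
    """Stand-in for an RNN predictor: extrapolate the last movement."""
    if len(history) < 2:
        return history[-1]
    prev, last = history[-2], history[-1]
    return [l + (l - p) for p, l in zip(prev, last)]

def reseed_swarm(particles, predicted, fraction=0.5, spread=0.5):
    """Move a fraction of particles near the predicted optimum; randomize
    the rest to preserve diversity after the change."""
    k = int(len(particles) * fraction)
    for i, p in enumerate(particles):
        if i < k:
            p[:] = [x + random.gauss(0, spread) for x in predicted]
        else:
            p[:] = [random.uniform(-5, 5) for _ in p]

if __name__ == "__main__":
    optima_history = [[0.0, 0.0], [0.5, 0.2], [1.0, 0.4]]   # past optima
    swarm = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(10)]
    reseed_swarm(swarm, predict_next_optimum(optima_history))
    print(swarm[0])   # a particle seeded near the predicted optimum
```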
A particle swarm optimization based feature selection approach to transfer learning in classification
Transfer learning aims to use knowledge acquired from existing (source) domains to improve learning performance on a different but similar (target) domain. Feature-based transfer learning builds a common feature space, which can minimize differences between source and target domains. However, most existing feature-based approaches build a common feature space under certain assumptions about the differences between domains, and the number of common features needs to be predefined. In this work, we propose a new feature-based transfer learning method using particle swarm optimization (PSO), where a new fitness function is developed to guide PSO to automatically select a subset of the original features and shift the source and target domains closer together. Classification performance is used in the proposed fitness function to maintain the discriminative ability of the selected features in both domains. The use of classification accuracy leads to a minimum number of model assumptions. The proposed algorithm is compared with four state-of-the-art feature-based transfer learning approaches on three well-known real-world problems. The results show that the proposed algorithm is able to select fewer than half of the original features while achieving better performance than using all features, and it outperforms the four benchmark semi-supervised and unsupervised algorithms. This is the first time evolutionary computation, especially PSO, has been utilized to achieve feature selection for transfer learning.
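A toy sketch of how a particle's binary feature mask could be scored for transfer (this fitness is an illustrative stand-in, not the fitness function proposed in the paper): a simple classifier is trained on the masked source data and evaluated on a small labelled portion of the target domain, with a mild penalty on the number of selected features. The 1-NN classifier, the weighting, and the synthetic data are all assumptions.

```python
import random

# Illustrative stand-in fitness only (not the fitness function proposed in
# the paper): score a binary feature mask by how well a 1-NN classifier
# trained on the masked source data predicts a small labelled target set,
# minus a mild penalty on the number of selected features.

def one_nn_accuracy(train, test):
    """train/test are lists of (feature_list, label); plain 1-NN."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    correct = 0
    for features, label in test:
        predicted = min(train, key=lambda t: dist(t[0], features))[1]
        correct += (predicted == label)
    return correct / len(test)

def mask_features(data, mask):
    return [([f for f, keep in zip(x, mask) if keep], y) for x, y in data]

def fitness(mask, source, target_labelled, sparsity_weight=0.01):
    if not any(mask):
        return 0.0
    src = mask_features(source, mask)
    tgt = mask_features(target_labelled, mask)
    transfer_acc = one_nn_accuracy(src, tgt)        # source -> target accuracy
    return transfer_acc - sparsity_weight * sum(mask)

if __name__ == "__main__":
    # Tiny synthetic data: 4 features, only the first two are informative.
    src = [([i, -i, random.random(), random.random()], i > 2) for i in range(6)]
    tgt = [([i + 0.3, -i, random.random(), random.random()], i > 2) for i in range(6)]
    print(fitness([1, 1, 0, 0], src, tgt))
```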
Semi-supervised learning assisted particle swarm optimization of computationally expensive problems
In many real-world optimization problems, it is very time-consuming to evaluate the performance of candidate solutions because the evaluations involve computationally intensive numerical simulations or costly physical experiments. Therefore, standard population-based meta-heuristic search algorithms are not well suited to such expensive problems because they typically require a large number of performance evaluations. To address this issue, many surrogate-assisted meta-heuristic algorithms have been proposed and shown to be promising in achieving acceptable solutions with a small computation budget. While most research focuses on reducing the required number of expensive fitness evaluations, little attention has been paid to taking advantage of the large amount of unlabelled data, i.e., the solutions that have not been evaluated using the expensive fitness functions, generated during the optimization. This paper makes use of semi-supervised learning techniques that enhance the training of surrogate models using the unlabelled data together with the labelled data in a surrogate-assisted particle swarm optimization algorithm. Empirical studies on five 30-dimensional benchmark problems show that the proposed algorithm is able to find high-quality solutions for computationally expensive problems within a limited computational budget.
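To illustrate the general flavour of using unevaluated solutions, here is a self-training style sketch under assumed choices, not the method in the paper: the expensively evaluated solutions form a labelled set, unevaluated candidates are pseudo-labelled with the current surrogate's own predictions, and the augmented set is then used for cheap pre-screening of new particles. The k-NN surrogate and the Sphere stand-in for the expensive evaluation are assumptions.

```python
import random

# Illustrative self-training style sketch (not the paper's method): expensively
# evaluated solutions form the labelled set, unevaluated candidates are
# pseudo-labelled with the surrogate's own predictions, and the augmented set
# backs a cheap fitness estimate for pre-screening new particles.

def expensive_fitness(x):
    """Stand-in for a costly simulation: the Sphere function."""
    return sum(v * v for v in x)

def knn_predict(point, data, k=3):
    """data is a list of (position, value); simple k-NN regression surrogate."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    nearest = sorted(data, key=lambda d: dist(d[0], point))[:k]
    return sum(value for _, value in nearest) / len(nearest)

def semi_supervised_training_set(labelled, unlabelled, k=3):
    """Pseudo-label the unevaluated points and return the augmented set."""
    pseudo = [(x, knn_predict(x, labelled, k)) for x in unlabelled]
    return labelled + pseudo

if __name__ == "__main__":
    dims = 5
    labelled = []
    for _ in range(20):                        # the expensive evaluations
        x = [random.uniform(-5, 5) for _ in range(dims)]
        labelled.append((x, expensive_fitness(x)))
    unlabelled = [[random.uniform(-5, 5) for _ in range(dims)] for _ in range(50)]
    surrogate_data = semi_supervised_training_set(labelled, unlabelled)
    candidate = [0.1] * dims
    print(round(knn_predict(candidate, surrogate_data), 3))   # cheap estimate
```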
A new foraging-based algorithm for online scheduling
While much work exists on scheduling, the literature in the subfield of online scheduling remains sparse. As with many problems, online scheduling has parallels with natural phenomena. Specifically, online scheduling can be seen in the division of labour among colony insects such as ants. Although multiple biological models of division of labour exist, the only one to have been applied to online scheduling is the reinforced threshold model, for instance in the form of the ant task allocation (ATA) algorithm. However, it is neither known how it compares to other models, nor in which applications any of them excel. This paper studies the foraging for work (FFW) model as a possible alternative. To do so, an algorithmic description of the FFW model is introduced and compared with the ATA algorithm on the truck painting problem, in which tasks of various types are scheduled in an online fashion in a flowshop with multiple identical machines. FFW is shown to be very effective at minimising setup time, which is incurred when switching to tasks of a different type. Furthermore, this allows FFW to outperform the threshold-based approaches when the scheduling environment is placed under heavy load.
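The setup-avoidance intuition can be illustrated with a small greedy dispatcher that, loosely in the spirit of foraging for work (this sketch is not the paper's FFW algorithm, and all constants, job types, and names are assumptions), keeps each machine hunting for jobs of its current type so that type switches, and hence setups, only happen when its own supply of work runs out.

```python
import collections

# Greedy dispatcher sketch loosely in the spirit of foraging for work (not the
# paper's FFW algorithm; setup/processing times and the job types are
# assumptions): an idle machine first "forages" for jobs of its current type,
# and only switches type, incurring a setup, when no such job is queued.

SETUP_TIME = 5
PROCESS_TIME = 1

class Machine:
    def __init__(self):
        self.current_type = None
        self.busy_until = 0

    def assign(self, job_type, now):
        setup = 0 if job_type == self.current_type else SETUP_TIME
        self.current_type = job_type
        self.busy_until = now + setup + PROCESS_TIME
        return setup

def schedule(arrivals, n_machines=2):
    """arrivals: list of (arrival_time, job_type) handled in an online fashion."""
    machines = [Machine() for _ in range(n_machines)]
    queues = collections.defaultdict(collections.deque)
    events = sorted(arrivals)
    total_setup, now = 0, 0
    while events or any(queues.values()):
        while events and events[0][0] <= now:
            _, job_type = events.pop(0)
            queues[job_type].append(job_type)
        for m in machines:
            if m.busy_until > now:
                continue
            if m.current_type and queues[m.current_type]:
                job_type = queues[m.current_type].popleft()   # no setup needed
            else:
                waiting = [t for t, q in queues.items() if q]
                if not waiting:
                    continue
                job_type = max(waiting, key=lambda t: len(queues[t]))
                queues[job_type].popleft()
            total_setup += m.assign(job_type, now)
        now += 1
    return total_setup

if __name__ == "__main__":
    jobs = [(t, "red" if t % 7 < 4 else "blue") for t in range(30)]
    print("total setup time:", schedule(jobs, n_machines=2))
```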