GECCO '18: Proceedings of the Genetic and Evolutionary Computation Conference Companion
POSTER SESSION: Poster: evolutionary numerical optimization
Differential Evolution (DE) is a robust optimization algorithm, but it suffers from the stagnation problem, in which individuals may fail to escape from a local optimum. In this article, we propose a new Cauchy mutation using multiple exponential recombination, self-adaptive parameter control, and linear failure threshold reduction. The proposed method is simple yet efficient in mitigating the stagnation problem, and it can improve the convergence speed of DE. Our experimental results show that the proposed method can find more accurate solutions to complicated problems.
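The abstract above gives no implementation details; as an illustrative sketch only, a Cauchy-distributed perturbation (heavy-tailed, so occasional long jumps can pull a stagnated individual out of a local optimum) might look like the following. The function names and the scale parameter are our assumptions, not the paper's:

```python
import math
import random

def cauchy(scale=0.1):
    # Heavy-tailed Cauchy step via inverse-CDF sampling.
    return scale * math.tan(math.pi * (random.random() - 0.5))

def cauchy_mutate(individual, lower, upper, scale=0.1):
    # Perturb every coordinate and clip back into the box constraints.
    return [min(max(x + cauchy(scale), lower), upper) for x in individual]

random.seed(0)
mutant = cauchy_mutate([0.5, -0.2, 0.0], lower=-1.0, upper=1.0)
```

The heavy tails are the point: unlike Gaussian noise, a Cauchy step has non-negligible probability of being large, which is what lets a stuck individual leave its basin.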
Benchmarks are important for comparing performance of optimisation algorithms, but we can select instances that present our algorithm favourably, and dismiss those on which our algorithm under-performs. Also related are automated design of algorithms, which use problem instances (benchmarks) to train an algorithm: careful choice of instances is needed for the algorithm to generalise.
We sweep parameter settings of differential evolution applied to the BBOB benchmarks. Several benchmark functions are highly correlated. This may lead to the false conclusion that an algorithm performs well in general when it performs poorly on a few key instances. These correlations vary with the number of evaluations.
Exploratory landscape analysis techniques are widely used for the algorithm selection problem. Existing sampling methods for exploratory landscape analysis are usually designed to sample unbiased candidates for measuring the characteristics of the entire search space. In this paper, we discuss the limitations of unbiased sampling and propose a novel sampling method, which is algorithm-based and thus biased. Based on this sampling method, we propose several novel landscape features, called algorithm-based landscape features. The proposed features are compared with conventional landscape features using supervised and unsupervised learning. The experimental results show that the algorithm-based landscape features outperform the conventional ones.
Dynamic constrained multi-objective evolutionary algorithms with a novel selection strategy for constrained optimization
The recently proposed dynamic constrained multi-objective evolutionary algorithm (DCMOEA) is effective at handling constrained optimization problems (COPs). However, one drawback of DCMOEA is that it mainly searches for the global optimum from infeasible regions, which may bias the search against feasible solutions; useful information about the optimal direction of feasible regions is then not fully exploited. To overcome this defect, this paper proposes a novel selection strategy based on the DCMOEA framework, called NSDCMOEA, to solve COPs. The performance of NSDCMOEA is evaluated on a set of benchmark suites. Experimental results validate that the proposed method is better than, or very competitive with, five state-of-the-art algorithms.
Multifactorial Optimization (MFO) has been attracting considerable attention in the evolutionary computation community. In this paper, we propose a general multi-population evolution framework (MPEF) for MFO, wherein each population has its own random mating probability (rmp) and is used for its own task. The benefits of using MPEF are twofold: 1) various well-developed evolutionary algorithms (EAs) can be easily embedded into MPEF for solving the task(s) of MFO problems; 2) different populations can implement different genetic material transfers. Moreover, for instantiation, we embed a powerful differential evolution algorithm, namely SHADE, into MPEF to form a multi-population DE algorithm (MPEF-SHADE) for solving MFO problems. The experimental results on nine MFO benchmark problems show that MPEF-SHADE is significantly better than, or at least competitive with, other multifactorial evolution algorithms such as MFEA, MFDE, MFPSO and AMA.
The traditional Gaussian estimation of distribution algorithm (EDA) may suffer from premature convergence and has a high risk of falling into local optima when dealing with multimodal problems. In this paper, we first attempt to improve the performance of EDA by utilizing historical solutions, developing a novel archive-based EDA variant. The use of historical solutions not only enhances the search efficiency of EDA to a large extent, but also significantly reduces the population size, so that faster convergence can be achieved. The archive-based EDA is then further integrated with a novel adaptive clustering strategy for solving multimodal optimization problems. Taking advantage of the clustering strategy in locating different promising areas and of the powerful exploitation ability of the archive-based EDA, the resulting algorithm is endowed with a strong capability for finding multiple optima. To verify its efficiency, we tested the proposed algorithm on a set of niching benchmark problems; the experimental results indicate that it is competitive.
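As background for the Gaussian EDA this abstract builds on, one iteration of a plain diagonal-Gaussian EDA can be sketched as follows: select the best solutions, fit a per-coordinate Gaussian, and resample. The paper's archive- and clustering-based variant is considerably more elaborate; this is only the baseline idea, with all names ours:

```python
import random
import statistics

def eda_step(population, fitness, n_select, n_sample):
    # Fit a per-coordinate Gaussian to the best solutions, then resample.
    elite = sorted(population, key=fitness)[:n_select]   # minimisation
    dims = len(elite[0])
    mu = [statistics.mean(x[d] for x in elite) for d in range(dims)]
    sd = [statistics.stdev(x[d] for x in elite) or 1e-9 for d in range(dims)]
    return [[random.gauss(mu[d], sd[d]) for d in range(dims)]
            for _ in range(n_sample)]

def sphere(x):
    return sum(v * v for v in x)

random.seed(1)
pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(20)]
new_pop = eda_step(pop, sphere, n_select=10, n_sample=20)
```

The premature-convergence risk the abstract mentions is visible here: the fitted standard deviations shrink every generation, so without extra diversity (e.g. an archive of historical solutions) the model can collapse onto a single local optimum.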
In this work, we give some numerical investigations of CMAES-APOP on some multi-modal functions and introduce a relaxed version of this algorithm to cope with the hard Schwefel function. The numerical simulations show the efficiency of our approach.
Enhancing cooperative coevolution for large scale optimization by adaptively constructing surrogate models
It has been shown that cooperative coevolution (CC) can effectively deal with large scale optimization problems (LSOPs) through a divide-and-conquer strategy. However, its performance is severely restricted by the current context-vector-based sub-solution evaluation method, because this method requires too many computation resources. To alleviate this issue, this study proposes an adaptive surrogate model assisted CC framework which adaptively constructs surrogate models for different sub-problems by fully considering their characteristics. By this means, the computation cost can be greatly reduced without significantly sacrificing evaluation quality. Empirical studies on the IEEE CEC 2010 large scale benchmark suite show that the concrete algorithm based on this framework performs well.
Extension of weighted empirical distribution and group-based adaptive differential evolution for joint chance constrained problems
There are two types of Chance Constrained Problems (CCPs), namely the Separate CCP (SCCP) and the Joint CCP (JCCP). This paper extends the optimization method for solving the SCCP and applies it to the JCCP. Using the Bonferroni inequality, the JCCP is stated in terms of the Cumulative Distribution Function (CDF) of the uncertain function value. The Weighted Empirical CDF (W_ECDF) is then used to approximate the CDF in the JCCP. A new Adaptive Differential Evolution (ADE) combined with the W_ECDF is also contrived for solving both CCPs.
Recently, the use of Radial Basis Functions (RBF) has been introduced as an alternative to co-Kriging in the context of multi-fidelity surrogate modeling. In this paper, we compare the performance of Random Forest-based co-surrogates to the previously introduced co-Kriging and co-RBF using a set of bi-fidelity benchmark problems in 2, 4 and 8 dimensions. Our results show that there is minimal overall difference between the co-surrogate models with regard to final performance, although training Random Forests takes much less time than the Kriging and RBF methods.
In this short paper, we reveal an issue with the covariance matrix adaptation evolution strategy when solving a function with periodic variables. We investigate the effect of a simple modification in which the coordinate-wise standard deviation of the sampling distribution is restricted to one fourth of the period length. This is achieved by pre- and post-multiplying the covariance matrix by a diagonal matrix.
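The described correction can be sketched directly: pre- and post-multiplying the covariance matrix by a diagonal matrix D caps each coordinate-wise standard deviation at one fourth of the corresponding period length. This is a minimal illustration under our own naming, not the authors' code:

```python
import math

def cap_periodic_std(cov, periods):
    # Return D @ cov @ D with D diagonal, so that sqrt(diag) <= period / 4.
    n = len(cov)
    d = [min(1.0, (periods[i] / 4.0) / math.sqrt(cov[i][i])) for i in range(n)]
    return [[d[i] * cov[i][j] * d[j] for j in range(n)] for i in range(n)]

cov = [[4.0, 0.5], [0.5, 0.25]]                    # std devs 2.0 and 0.5
capped = cap_periodic_std(cov, periods=[2.0, 2.0]) # cap each std at 0.5
```

Scaling both rows and columns (rather than just the diagonal) keeps the matrix symmetric and positive semi-definite, so it remains a valid sampling covariance.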
In contrast to traditional single-task evolutionary algorithms, the multi-factorial evolutionary algorithm (MFEA) has been proposed recently to conduct evolutionary search on multiple tasks simultaneously. It aims to improve the convergence characteristics of the tasks to be tackled by seamlessly transferring knowledge among them. Towards superior multitasking performance, the evaluation of task relationships plays an important role in grouping related tasks and solving them at the same time. However, in the literature, little work has been conducted to provide deeper insight into the measurement of task relationships in MFEA. In this paper, we therefore present a study of similarity measures between tasks for MFEA from three different perspectives. 21 multitasking problem sets are developed to investigate and analyze the effectiveness of the three similarity measures with MFEA for evolutionary multitasking.
POSTER SESSION: Poster: genetic algorithms
How to disseminate information or ideas through social network connections has received much attention in recent years. The core issue is to find a seed set of initially active individuals that can maximize the influence spread. In this paper, we present a comparative study of three basic algorithms for this issue. Experimental results show that although the genetic algorithm can find slightly better solutions than the other algorithms, it is too time-consuming to be cost-effective. Hence, our ongoing work aims at improving the search efficiency of different bio-inspired meta-heuristic methods.
In this paper we introduce KafkEO, a cloud native evolutionary algorithms framework that is prepared to work with population-based metaheuristics by using micro-populations and stateless services as the main building blocks; KafkEO is an attempt to map the traditional evolutionary algorithm to this new cloud-native format.
Using genetic algorithms based on neighbor list mechanism to reduce handover latency for IEEE 802.11 WLAN
In IEEE 802.11 Wireless Local Area Networks (WLANs), mobile stations (STAs) have to perform handover to keep their network connections when they move out of the range of an access point (AP). However, the STAs collect information about surrounding APs by channel scanning, which causes high handover latency and degrades the quality of mobility. Therefore, minimizing the scanning delay is key to enabling seamless communications over WLAN. In this paper, we propose a genetic algorithm (GA) combined with a neighbor list mechanism to reduce handover latency. Using the proposed approach, handover latency is minimized by reducing both the number of scanned channels and the waiting time for each channel. Simulation results demonstrate that our approach improves throughput by up to 9.5% and reduces handover delay by up to 74% compared to standard IEEE 802.11.
This paper aims to show a glimpse of the potential benefits of using genetic algorithms (GAs) in EEG data analysis. The system attempts machine-learning-based classification between a mental state reached during competitive gameplay and an idle state. EEG activity from a large number of experienced players of League of Legends was recorded during both idle periods and a competitive, ranked game. A proven, industry-tested GA is used to optimize aspects of the brain-computer interface (BCI) system by evolving both electrode and feature sets. Alongside a rather high cross-validation base classification accuracy, statistically significant improvements have been obtained by applying the GA to select both the electrode subset and the features. The run time improvements are evident as well, although further improvements are possible. This not only allows major reductions in processing power and data acquisition requirements, but is also a step further in bringing BCI technology closer to consumer products and in using cognitive state identification to improve gaming experience, as well as other aspects of everyday life.
A deeper understanding of how the power consumption of evolutionary algorithms behaves is necessary to keep delivering high quality results without wasting energy resources. This paper presents a black-box model for predicting the energy consumption of the NSGA-II-based Parallel Islands approach to Multiobjective Feature Selection (pi-MOFS). We analyzed the power usage of each stage of pi-MOFS when applied to a brain-computer interface classification task. Fitness evaluation emerged as the most relevant stage of the presented case study in both time and power consumption. The results showed a 98.81% prediction accuracy over the eight experiments designed. We believe that our findings and methodology can be used to apply pi-MOFS, NSGA-II and other EAs to current optimization problems from an energy-aware perspective.
The influence of fitness caching on modern evolutionary methods and fair computation load measurement
Any evolutionary method may store the fitness values of genotypes it has already rated. Whenever a fitness value is to be computed, a check can first be made whether the fitness of the same genotype was computed earlier. If so, then instead of re-evaluating the same genotype, the stored value from the repository can be returned. This technique will be denoted as fitness caching. It is easy to implement in any evolutionary method, and it minimizes the number of fitness function evaluations (FFE), which is desirable. Despite its simplicity, fitness caching may significantly affect the computation load spent on fitness computation. Moreover, it may render FFE an unreliable measure of computation load.
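Fitness caching as described is essentially memoization of the fitness function. A minimal sketch (our own naming), which also counts how calls and true evaluations diverge, illustrates why FFE stops reflecting the actual computation load:

```python
def cached_fitness(fitness):
    # Wrap a fitness function with a genotype -> value repository.
    cache = {}
    stats = {"calls": 0, "evaluations": 0}
    def wrapper(genotype):
        stats["calls"] += 1
        key = tuple(genotype)            # genotype must be hashable
        if key not in cache:
            stats["evaluations"] += 1    # only cache misses cost compute
            cache[key] = fitness(genotype)
        return cache[key]
    return wrapper, stats

def onemax(genotype):
    return sum(genotype)

f, stats = cached_fitness(onemax)
f([1, 0, 1]); f([1, 0, 1]); f([0, 0, 0])   # 3 calls, 2 real evaluations
```

Here the method would report 3 FFE while only 2 evaluations were actually computed, which is exactly the measurement gap the abstract warns about.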
Given the advances in areas such as home automation, agricultural networks, and smart cities, designers often need to design protocols that utilize the features of the network at hand while dealing with its limitations. Standardized protocols may not be appropriate for such networks, as they may not address limitations such as heterogeneous nodes or the limited capability of some nodes. While designing a customized protocol for a given network would be desirable, it is extremely time-consuming unless the development of the required protocol can be automated. In this paper, we present NetSynth, a GP-based mechanism to develop customized protocols for a given network. NetSynth lets us conveniently describe a network using an XML file, and it synthesizes protocols suited to the input network by considering its specific characteristics.
POSTER SESSION: Poster: general evolutionary computation and hybrids
We propose a heterogeneous island model in which each island can run a different optimization algorithm. The distributed computation is managed by a central planner that re-plans the methods during the run of the algorithm: less successful methods are removed, while new instances of more successful methods are added. We show that this re-planning improves the performance of the optimization algorithm.
An evolutionary algorithm with a new operator and an adaptive strategy for large-scale portfolio problems
A portfolio optimization problem involves the optimal allocation of finite capital to a series of assets to achieve an acceptable trade-off between profit and risk in a given investment period. In this paper, the extended Markowitz mean-variance portfolio optimization model is studied with some practical constraints. We introduce a new operator and an adaptive strategy to improve the performance of the multi-dimensional mapping (MDM) algorithm, proposed specifically for portfolio optimization. Experimental results show that the modification is efficient at tackling large-scale portfolio problems.
We propose a framework for estimating the quality of solutions in a robust optimisation setting by utilising samples from the search history and using MC sampling to approximate a Voronoi tessellation. This is used to determine a new point in the disturbance neighbourhood of a given solution such that, along with the relevant archived points, they form a well-spread distribution; it is also used to weight the archive points to mitigate any selection bias in the neighbourhood history. Our method performs comparably with existing frameworks when implemented inside a CMA-ES on 9 test problems collected from the literature in 2 and 10 dimensions.
A hybrid differential evolution and estimation of distribution algorithm for the multi-point dynamic aggregation problem
The multi-point dynamic aggregation (MPDA) problem is a typical task planning problem. To solve it efficiently, a hybrid differential evolution (DE) and estimation of distribution algorithm (EDA), called DE-EDA, is proposed in this paper, combining the merits of DE and EDA. DE-EDA has been applied to multiple MPDA instances of different scales and compared with EDA and two versions of DE in terms of convergence speed and solution quality. The results demonstrate that DE-EDA can solve the MPDA problem effectively.
Crowding distance based promising solution selection in surrogate assisted asynchronous multi-objective evolutionary algorithm
This paper proposes an efficient solution selection method for a surrogate-assisted asynchronous multi-objective evolutionary algorithm. Our previous research proposed a novel multi-objective evolutionary algorithm that integrates a surrogate evaluation model with an asynchronous approach, named AELMOEA/D. AELMOEA/D constructs a surrogate model with an extreme learning machine (ELM) and generates a promising solution by MOEA/D with the constructed ELM model. A generated promising solution is selected in the order of the indexes of the weight vectors of MOEA/D and is evaluated asynchronously. In contrast to the previous method, the proposed method considers the degree of search progress of each weight vector and selects a promising solution in a region where the search progress is insufficient. To evaluate the degree of search progress, this study employs the crowding distance, originally used in NSGA-II. To investigate the effectiveness of the proposed method, we conduct an experiment on a multi-objective optimization benchmark problem. The experimental results reveal that the proposed method can accelerate the convergence of the optimization without deteriorating performance compared with the previous method.
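The abstract reuses the crowding distance from NSGA-II; for reference, the textbook computation over a set of objective vectors is sketched below (boundary solutions receive infinite distance, interior ones the normalised extent of the surrounding cuboid). This is the standard formulation, not code from the paper:

```python
def crowding_distance(objs):
    # objs: list of objective vectors belonging to one non-dominated front.
    n, m = len(objs), len(objs[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: objs[i][k])
        span = (objs[order[-1]][k] - objs[order[0]][k]) or 1.0
        dist[order[0]] = dist[order[-1]] = float("inf")  # keep extremes
        for j in range(1, n - 1):
            dist[order[j]] += (objs[order[j + 1]][k]
                               - objs[order[j - 1]][k]) / span
    return dist

d = crowding_distance([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
```

A low crowding distance signals a densely explored region, which is the kind of progress indicator the proposed method uses to steer selection toward under-explored weight vectors.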
Modeling and simulation of complex systems are powerful ways to investigate a multitude of natural phenomena, providing extended knowledge of their structure and behavior. However, enhanced modeling and simulation require the integration of various data and knowledge sources, models of various kinds, and intelligent components into one composite solution. The growing complexity of such composite models calls for specific management approaches. To support the automatic building and management of complex models, we propose a general evolutionary computation approach based on an evolutionary investigation of the model phase space to identify the best model structure and parameters.
In this paper, we propose two new approaches to rank the frequently used empirical cumulative distribution functions (ECDFs) for the performance assessment of stochastic optimization algorithms. In the first approach, the different orders of stochastic dominance among running lengths are adopted in a hierarchical manner: first-order stochastic dominance is tested, and the second order is used when the first order leads to incomparable results. In the second approach, ECDFs are considered as the local Pareto front of a bi-criteria decision-making problem, in which one objective is to achieve a high success rate and the other is to use as few function evaluations as possible. In this case, we propose adopting a multi-objective performance indicator to handle incomparable ECDFs.
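First-order stochastic dominance between two samples of running lengths can be checked by comparing their ECDFs at every budget; a minimal sketch under our own naming, with hypothetical data:

```python
def ecdf(samples, t):
    # Proportion of runs reaching the target within t evaluations.
    return sum(s <= t for s in samples) / len(samples)

def dominates_first_order(a, b):
    # A dominates B iff A's success proportion is >= B's at every budget.
    budgets = sorted(set(a) | set(b))
    return all(ecdf(a, t) >= ecdf(b, t) for t in budgets)

runs_a = [100, 150, 200, 250]   # hypothetical running lengths (evaluations)
runs_b = [150, 200, 300, 400]
ok = dominates_first_order(runs_a, runs_b)   # True for this data
```

When neither sample dominates the other in this sense, the two ECDFs cross, and that is precisely the incomparable case for which the paper falls back to second-order dominance or a multi-objective indicator.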
In evolutionary algorithms, a preselection operator aims to choose promising offspring solutions for further environmental selection. Most existing preselection operators are based on fitness values, surrogate models, or classification models. Since a preselection operation can be regarded as a classification procedure, classification-based preselection is a natural choice for evolutionary algorithms. However, it is not trivial to prepare 'positive' and 'negative' training samples when using binary and/or multi-class classification models in preselection. To deal with this problem, this paper proposes a one-class classification based preselection (OCPS) scheme, which only needs one class of 'positive' training samples. The proposed OCPS scheme is applied to two state-of-the-art evolutionary algorithms on a test suite. The experimental results show the potential of OCPS for improving the performance of some existing evolutionary algorithms.
In this paper, we propose a novel differential evolution (DE) variant with multi-information guidance. First, based on a rank-based method, the DE population is divided into three groups using both fitness and position information. Then, three distinct combinations of mutation strategies and parameter settings are assigned to the three groups, respectively. Finally, a neighborhood search operator is applied with the aim of exploiting neighborhood information. Experimental results on 22 well-known benchmark functions show the effectiveness of our approach.
POSTER SESSION: Poster: genetic programming
This paper investigates the possibility of evolving new particle swarm equations representing a collective search mechanism, acting in environments with unknown external dynamics, using Geometric Semantic Genetic Programming (GSGP). The proposed method uses a novel initialization technique, the Evolutionary Demes Despeciation Algorithm (EDDA), which generates solutions of smaller size than the traditional ramped half-and-half algorithm. We show that EDDA, using a mixture of both GP and GSGP mutation operators, allows us to evolve new search mechanisms with good generalization ability.
Classification of resting-state fMRI for olfactory dysfunction in Parkinson's disease using evolutionary algorithms
Accurate early diagnosis and monitoring of neurodegenerative conditions is essential for effective disease management and treatment. This research develops automatic methods for detecting brain imaging preclinical biomarkers for olfactory dysfunction in early stage Parkinson's disease (PD) by considering the novel application of evolutionary algorithms. Classification will be applied to PD patients with severe hyposmia, patients with no/mild hyposmia, and healthy controls. An additional novel element is the use of evolutionary algorithms to map and predict the functional connectivity using rs-fMRI. Cartesian Genetic Programming (CGP) will be used to classify dynamic causal modelling (DCM) data as well as timeseries data. The findings will be validated using two other commonly used classification methods (ANN and SVM) and by employing k-fold cross-validation. Developing methods for identifying early stage PD patients with hyposmia is relevant since current methods of diagnosing early stage PD have low reliability and accuracy. Furthermore, exploring the performance of CGP relative to other methods is crucial given the additional benefits it provides regarding easy classifier decoding. Hence, this research underscores the potential relevance of DCM analyses for classification and CGP as a novel classification tool for brain imaging data with medical implications for disease diagnosis, particularly in early stages.
Multi-population genetic programming with adaptively weighted building blocks for symbolic regression
Genetic programming (GP) is a powerful tool for symbolic regression, which requires finding mathematical formulas to fit given observed data. However, existing GP methods construct solutions based on building blocks (i.e., the terminal and function sets) defined by users in an ad-hoc manner. The search efficacy of GP can degrade significantly as the size of the building-block set increases. To address this problem, this paper proposes a multi-population GP framework with adaptively weighted building blocks. The key idea is to divide the whole population into multiple sub-populations whose building blocks carry different weights. During evolution, the weights of building blocks in the sub-populations are adaptively adjusted so that important building blocks receive larger weights and higher selection probabilities when constructing solutions. The proposed framework is tested on a set of benchmark problems, and the experimental results demonstrate the efficacy of the proposed method.
A Term-Weighting Scheme (TWS) is an important step in text classification. It determines how documents are represented in the Vector Space Model (VSM). Even though state-of-the-art TWSs exhibit good behavior, many new works propose approaches and TWSs that improve performance. Furthermore, it is still difficult to tell which TWS is well suited to a specific problem. In this paper, we are interested in automatically generating new TWSs with the help of evolutionary algorithms, especially genetic programming (GP). GP evolves and combines different statistical information and generates a new TWS based on the performance of the learning method. We evaluate the generated TWSs on three well-known benchmarks. Our study shows that even early generated formulas are quite competitive with state-of-the-art TWSs and in some cases outperform them.
We explore the application of GOMEA, a recent method for discovering and exploiting the model for a problem in the form of linkage, to Grammatical Evolution (GE). GE employs an indirect representation based on familiar bit-string genotypes and is applicable to any problem where the solutions may be described using a context-free grammar, which hence greatly favors its wide adoption. Being general purpose, the representation of GE raises the opportunity for benefiting from the potential of GOMEA to automatically discover and exploit the linkage. We analyze experimentally the application of GOMEA to two bit-string-based variants of GE representation (the original representation and the recent WHGE) and show that GOMEA is clearly beneficial when coupled to WHGE, whereas it delivers no significant advantages when coupled with GE.
Supervised learning by means of Genetic Programming aims at the evolutionary synthesis of a model that achieves a balance between approximating the target function on the training data and generalising on new data. In this study we benchmark the approximation / generalisation of models evolved using different function set setups, across a range of symbolic regression problems. Results show that Koza's protected division and power should be avoided, and operators such as analytic quotient and sine should be used instead.
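For reference, the two division variants at issue follow their standard definitions: Koza's protected division returns 1 on a zero denominator, introducing a discontinuity, while the analytic quotient a / sqrt(1 + b^2) is smooth everywhere (a sketch of the standard operators, not the authors' code):

```python
import math

def protected_div(a, b):
    # Koza-style protected division: discontinuous at b == 0.
    return 1.0 if b == 0 else a / b

def analytic_quotient(a, b):
    # Smooth everywhere; behaves like a / b for large |b|.
    return a / math.sqrt(1.0 + b * b)

pd0 = protected_div(3.0, 0.0)       # 1.0
aq0 = analytic_quotient(3.0, 0.0)   # 3.0
```

The discontinuity is the generalisation hazard: a model whose training inputs never hit b = 0 can still be evaluated arbitrarily close to it on new data, where protected division jumps while the analytic quotient varies smoothly.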
Analyzing effects of various trust in product recalls using a social simulation with a co-evolution model
To improve product recall systems, we studied social simulation using a multi-agent system with a co-evolution model. This research is important because empirical approaches are no longer adequate for complex and diverse modern societies. The results of a simulation experiment reveal the possibility that improving consumer trust in product recall actions is useful for producers and makes it possible to sell expensive products. We believe this work can contribute to supporting government staff in improving product recall systems and executive officers of product companies deliberating over recall decision strategies.
POSTER SESSION: Poster: real world applications
Performance assessment of a modified multi-objective cuckoo's search algorithm for microgrid planning considering uncertainties
The implementation of the microgrid concept is expected to bring multiple advantages for the sustainability and reliability of electric power systems. In this paper, the performance of a modified Cuckoo Search (CS) algorithm with parameter control is evaluated for solving a planning optimization problem that considers the uncertainties of non-dispatchable energy resources and both operation modes of a microgrid. The results show that the Cuckoo Search algorithm offers advantages in terms of exploration of the search space.
A sentiment analysis-based machine learning approach for financial market prediction via news disclosures
Stock market prediction plays an important role in financial decision-making for investors. Many of them rely on news disclosures to make their decisions in buying or selling stocks. However, accurate modelling of stock market trends via news disclosures is a challenging task, considering the complexity and ambiguity of natural languages used. Unlike previous work along this line of research, which typically applies bag-of-words to extract tens of thousands of features to build a prediction model, we propose a sentiment analysis-based approach for financial market prediction using news disclosures. Specifically, sentiment analysis is carried out in the pre-processing phase to extract sentiment-related features from financial news. Historical stock market data from the perspective of time series analysis is also included as an input feature. With the extracted features, we use a support vector machine (SVM) to build the prediction model, with its parameters optimised through particle swarm optimisation (PSO). Experimental results show that our proposed SVM and PSO-based model is able to obtain better results than a deep learning model in terms of time and accuracy. The results presented here are to date the best in the literature based on the financial news dataset tested. This excellent performance is attributed to the sentiment analysis done during the pre-processing stage, as it reduces the feature dimensions significantly.
We developed a Voronoi-based algorithm, called Bio-Inspired Self-Organizing Network (BISON), designed to deploy a wireless sensor network (WSN) through a fast, cost-efficient and self-organizing process, autonomously adapting to the unknown topology of the target environment and avoiding obstacles discovered in real time. To limit the power consumed during deployment, BISON restricts each node to using only locally sensed information to adapt to the live-discovered topology while avoiding obstacles and connecting with neighboring nodes. The algorithm is evaluated with respect to several metrics, and simulation results show faster convergence to a fully connected network with lower deployment costs compared to similar algorithms reported in the literature.
An optimization study of screw position and number of screws for the fixation stability of a distal femoral locking compression plate using genetic algorithms
A distal locking compression plate (DLCP) has been used to treat distal femoral fractures. A DLCP with a large number of screws can improve fixation stability, but using a small number of screws reduces damage to soft tissue and bone. The purpose of this study was to determine the best screw positions and number of DLCP screws for distal femoral fracture fixation. Three-dimensional finite element models of the spine-pelvis-femur complex were developed to evaluate fixation stability. The best screw positions and number of DLCP screws were determined using a simulation-based genetic algorithm. The results showed that a DLCP with eight screws had acceptable fixation stability. The best screw placement was four DLCP screws on either side of the bone fragment, with three DLCP screws as close as practicable to the fracture site.
Super-resolution reconstruction (SRR) allows for enhancing image spatial resolution from low-resolution (LR) observations, which are assumed to have been derived from a hypothetical high-resolution image by applying a certain imaging model (IM). However, if the actual degradation is different from the assumed IM, which is often the case in real-world scenarios, then the reconstruction quality is affected. We introduce a genetic algorithm to optimize the SRR hyper-parameters and to discover the actual IM by evolving the kernels exploited in the IM. The reported experimental results indicate that our approach outperforms the state of the art for a variety of images, including difficult real-life satellite data.
Process mining aims to discover the workflow of a process from event logs, providing insights into organizational processes that can be used to improve these processes and their support systems. Ideally, a process mining algorithm should produce a model that is simple, precise, general, and fits the available logs. A conventional process mining algorithm typically generates a single process model, which may not describe the recorded behavior effectively. Recently, Pareto multi-objective evolutionary algorithms (MOEAs) have been used to generate several competing process models from the event logs, from which a user can choose a model based on his/her preference. In this paper, we use three second-generation MOEA techniques, namely PAES, SPEA-II, and NSGA-II, to generate a set of non-dominated process models. Using the BPI datasets, we demonstrate the efficacy of NSGA-II with respect to solution quality over its competitor algorithms.
Optimizing traffic lights at road intersections is a mandatory step toward sustainable mobility and efficient public transportation in modern cities. Several single- or multi-objective optimization methods exist to find the best traffic signal settings, such as evolutionary algorithms, fuzzy logic algorithms, or particle swarm optimization. However, they are generally dedicated to very specific traffic configurations. In this paper, we introduce the SIALAC benchmark, bringing together about 24 real-world-based study cases, and investigate the fitness landscape structure of these problem instances.
In most applications of wireless sensor networks (WSNs), a solution that finds a number of non-disjoint cover sets is expected to save more energy. Based on a sleep-scheduling method, this paper proposes a modified genetic algorithm to maximize network lifetime and schedule the sensors in the sets successively. Results show that, under the same conditions, the proposed algorithm always finds the sets with maximum lifetime and consumes less computation time compared with recently published algorithms.
Optimal control of greenhouse environments can be improved by using a combined microclimate-crop-yield model to allow selection of greenhouse designs and control algorithms to maximize the profit margin. However, classical methods for optimal greenhouse control are not adequate to establish the tradeoffs between multiple objectives. We use NSGA-II to evolve the setpoints for microclimate control in a greenhouse simulation and define two objectives: minimizing variable costs and maximizing the value of the tomato crop yield. Results show that the evolved setpoints can provide the grower a variety of better solutions, resulting in greater profitability compared to prior simulated results. The Pareto front also provides additional information to the grower, showing the economic tradeoffs between variable costs and tomato crop yield, which can aid in decision making.
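The two-objective trade-off can be made concrete with a minimal Pareto-dominance sketch. The candidate numbers below are invented for illustration; both objectives are treated as minimized, with crop value negated.

```python
def dominates(a, b):
    """True when a is no worse than b in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Invented (variable cost, -crop value) pairs for candidate setpoint schedules:
candidates = [(10, -50), (12, -60), (11, -40), (15, -61), (10, -55)]
front = pareto_front(candidates)           # the economic trade-off curve
```

The surviving points form the front presented to the grower: no schedule on it can be improved in one objective without worsening the other.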
As a promising approach to the design of energy-efficient circuits, approximate circuits and approximate circuit design methodologies have attracted significant attention from researchers as well as industry. Compared to traditional design methods, evolutionary approaches have been demonstrated to discover approximate circuits exhibiting a good trade-off between energy consumption and circuit quality. In this work, the evolutionary design of large approximate adders is addressed. In order to improve scalability, the quality of candidate solutions is analyzed using a formal approach based on Binary Decision Diagrams. Compared to the common approach based on a parallel circuit simulator, the proposed method is able to evaluate 2-3 orders of magnitude more generations.
For air-conditioning systems in office buildings, it is crucial to reduce power consumption while maintaining office workers' thermal comfort. To generate practically feasible temperature-setting schedules for air-conditioning systems, we propose an evolutionary multi-objective air-conditioning schedule optimization system for office buildings. In the proposed system, the target building is modeled and simulated with the EnergyPlus building simulator, which is used in the building construction field, and the OMOPSO optimizer is employed to optimize temperature-setting schedules. Experimental results show that the proposed system can obtain temperature schedules exhibiting a good trade-off between thermal comfort and power consumption.
Towards a small diverse pareto-optimal solutions set generator for multiobjective optimization problems
Multiobjective evolutionary algorithms (MOEAs) try to produce enough and sufficiently diverse Pareto-optimal tradeoff solutions to cover the entire Pareto surface. However, in practical scenarios, presenting numerous solutions to stakeholders may result in confusion and indecision. This paper proposes a method for generating a small (user-specified) number of well-distributed Pareto-optimal feasible solutions for multiobjective problems. The proposed method can be applied to a set of aggregate solutions produced by (1) one MOEA over multiple runs, (2) several different MOEAs, or (3) a universal set of feasible solutions produced by one or more constraint solvers.
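One simple way to realize such a selection, sketched here as a generic stand-in for the paper's method, is greedy max-min (farthest-point) selection over the aggregated solution set: repeatedly pick the solution farthest from everything already chosen.

```python
import numpy as np

def select_diverse_subset(points, k, seed_index=0):
    """Greedy max-min (farthest-point) selection: return the indices of k
    well-spread points from a set of objective vectors."""
    points = np.asarray(points, dtype=float)
    chosen = [seed_index]
    # Distance of every point to its nearest already-chosen point.
    dist = np.linalg.norm(points - points[seed_index], axis=1)
    while len(chosen) < k:
        nxt = int(np.argmax(dist))         # farthest from the chosen set
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return chosen

# Usage: 100 aggregated points on a 2-D front, keep a user-specified 5.
front = np.array([[t, 1.0 - t] for t in np.linspace(0.0, 1.0, 100)])
subset = select_diverse_subset(front, k=5)
```

On this linear front the selection spreads out as expected, keeping both extremes and near-evenly spaced interior points.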
Using coevolutionary algorithms to find solutions to problems is a powerful technique but once solutions are identified it can be difficult for a decision maker to select a solution to deploy. ESTABLO runs competitive coevolutionary algorithm variants independently, in parallel, and then combines their test and solution results at the final generation into a compendium. From there, it re-evaluates each solution, according to three different measurements, on every test as well as on a set of unseen tests. For a decision maker, it finally identifies top solutions using various metrics and visualizes them in the context of other solutions. We demonstrate ESTABLO on a cyber security related resource allocation problem.
Multifactorial disorders are diseases whose underlying causes may be due to a combination of factors, including genetics, environmental influences, and lifestyle. Deconvoluting the factors responsible for such issues can be tremendously challenging. However, evolutionary algorithms (EAs) are particularly well suited to address this class of problem. EAs may be used to generate populations of solutions to carry out ensemble-based analyses of mathematical models that can determine which factors may have the largest impact on a particular disease state. In this work, a commonly utilized mathematical model for bone physiology was used to explore the underlying mechanism responsible for secondary hyperparathyroidism due to renal failure. Using a genetic algorithm (GA), a population of parameter sets for the bone model were generated based on a clinical data set. The average result of the best five GA-based models was compared to the standard bone model prediction as reported in the literature. The GA-based models manipulated several parameters simultaneously while the standard literature model manipulated only the glomerular filtration rate (GFR) parameter. The GA model resulted in a superior fit of the clinical data. It further indicated that six of the 108 parameters were significantly changed from what would be expected of a healthy individual. The GFR parameter was among the six significantly changed parameters. The remaining significantly changed parameters involved phosphate imbalances. Interestingly, hyperphosphatemia has been recognized as one of the primary causes of secondary hyperparathyroidism. These results would appear to corroborate the effectiveness of the GA in better understanding multifactorial disorders.
This paper proposes a total optimization method for a smart city (SC) using Global-best Brain Storm Optimization (GBSO). The SC model includes natural gas utilities, electric power utilities, drinking and waste water treatment plants, industries, buildings, residences, and railroads. The proposed method minimizes energy cost, shifts actual electric power loads, and minimizes CO2 emissions using the model. Particle Swarm Optimization (PSO), Differential Evolution (DE), and Differential Evolutionary Particle Swarm Optimization (DEEPSO) have previously been applied to this optimization problem; however, there is room for improving solution quality. The proposed GBSO-based method is applied to a model of a moderately sized Japanese city, such as Toyama City, and is compared with the conventional DEEPSO- and BSO-based methods, with promising results.
This contribution presents numerical results for optimizing a many-objective space mission trajectory benchmark under massively parallelized co-evaluation of solution candidates. The considered benchmark is the well-known Cassini1 instance published by the European Space Agency (ESA), extended to four objectives. The MIDACO optimization software, an evolutionary algorithm based on Ant Colony Optimization (ACO), is applied to solve this benchmark with a fine-grained parallelization factor (P) ranging from one to 1024. It is shown that the required number of sequential steps to solve this benchmark can be significantly reduced by applying massive parallelization, while still maintaining a sufficiently high number of well-distributed non-dominated solutions in the objective space.
Nowadays, city streets are populated not only by private vehicles but also by public transport, goods distribution, and delivery vehicles. Since each vehicle class has a maximum cargo capacity, we study in this article how authorities could improve road traffic by changing the proportions of the different vehicle classes: sedans, minivans, full-size vans, trucks, and motorbikes, without losing the ability to move cargo throughout the city. We have performed our study on a realistic scenario and defined a multi-objective optimization problem to minimize the resulting city traffic metrics. Our results provide scientific evidence that the delivery of goods in the city can be improved by reducing the number of heavy-duty vehicles and fostering the use of full-size vans instead.
Dynamical models are a mathematical framework for understanding the spread of a disease through various epidemiological parameters. However, in data-scarce regions like the Philippines, local estimates of epidemiological parameters are difficult to obtain because the methods for obtaining them are costly or inaccessible. In this paper, we employ genetic algorithms trained with novel fitness functions as a low-cost, data-driven method to estimate parameters for dengue incidence in the Western Visayas Region of the Philippines (2011-2016). Initial results show a good fit between monthly historical values and model outputs using the parameter estimates, with a best Pearson correlation of 0.86 and a normalized error of 0.65 over the selected 72-month period. Furthermore, we demonstrate a quality-assessment procedure for selecting biologically feasible and numerically stable parameter estimates. Implications of our findings are discussed in both epidemiological and computational contexts, highlighting their application in FASSSTER, an integrated syndromic surveillance system for infectious diseases in the Philippines.
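A Pearson-correlation-based fitness term of the kind reported can be sketched as follows. The paper's exact fitness functions are not reproduced; `correlation_fitness` is an illustrative assumption that simply rewards model outputs that track the historical incidence curve.

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)

def correlation_fitness(observed, predicted):
    """Illustrative fitness to minimize: 0 when the model output tracks
    the monthly incidence curve perfectly, 2 when it is anti-correlated."""
    return 1.0 - pearson(observed, predicted)
```

A GA would evaluate each candidate parameter set by running the dynamical model and scoring its monthly output against the historical series with a function of this shape.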
A novel genetic algorithm for lifetime maximization of wireless sensor networks with adjustable sensing range
Most existing algorithms for optimizing the lifetime of wireless sensor networks (WSNs) assume that the sensing ranges of sensors are fixed. This paper focuses on adjustable WSNs and proposes a lifetime maximization approach in which the active periods and sensing ranges of sensors are scheduled simultaneously, subject to the constraints of target coverage and power limits. First, the lifetime maximization problem is converted into the problem of finding a set of coverage patterns that leads to the best schedule when fed into a linear programming model. A genetic algorithm is then developed for coverage pattern finding. With each individual representing a coverage pattern, evolutionary operators and a repair strategy are tailored to evolve the pattern set efficiently. Experimental results under a variety of conditions show that the proposed approach is advantageous in terms of both computational time and solution quality.
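The linear programming step can be sketched on a toy instance: given a fixed set of coverage patterns, allocate an active duration to each pattern so that total lifetime is maximized without any sensor exceeding its battery. The energy matrix, battery levels, and pattern set below are invented, and `scipy` is assumed available.

```python
import numpy as np
from scipy.optimize import linprog

# Invented instance: 3 sensors, 4 coverage patterns. energy[s, p] is the
# power sensor s draws while pattern p is active (0 when s sleeps in p);
# every pattern is assumed to cover all targets.
energy = np.array([
    [1.0, 0.0, 0.5, 0.0],
    [0.0, 1.0, 0.5, 0.5],
    [0.5, 0.5, 0.0, 1.0],
])
battery = np.array([2.0, 2.0, 2.0])        # initial energy per sensor

# Maximize the total active time over all patterns (= network lifetime),
# subject to no sensor exceeding its battery. linprog minimizes, so the
# objective coefficients are negated.
res = linprog(c=-np.ones(energy.shape[1]),
              A_ub=energy, b_ub=battery,
              bounds=[(0.0, None)] * energy.shape[1])
lifetime = -res.fun                        # total schedule length
```

The GA in the paper searches over which coverage patterns to feed into this LP; the LP itself only assigns durations to a given pattern set.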
POSTER SESSION: Poster: search-based software engineering
We present a novel approach for discovering and suggesting classes/objects in legacy/procedural code, based on a genetic algorithm. Initially, a (procedures-accessing-variables) matrix is extracted from the code and converted into a square matrix. This matrix highlights the variable-relationships between procedures and is used as input to a genetic algorithm. The output of the genetic algorithm is then visually encoded using a heat-map. The developers can then (1) either manually identify objects in the presented heat-map or (2) use an automated detection algorithm that suggests objects. We compare our results with previous work.
Performance bugs are common and can cause a significant deterioration in the behaviour of a program, leading to costly issues. To detect them and reduce their impact, performance tests are typically applied. However, there is a lack of mechanisms to evaluate the quality of performance tests, causing many of these bugs to remain unrevealed. Mutation testing, a fault-based technique to assess and improve test suites, has been successfully studied with functional tests. In this paper, we propose the use of mutation testing together with a search-based strategy (an evolutionary algorithm) to find mutants that simulate performance issues. This novel approach helps enhance confidence in performance tests while reducing the cost of mutation testing.
In a typical crowdsourcing process, a task publisher first publishes a task with an acceptable budget. Then hundreds of crowdsourced workers apply for the task with their desired bids. To recruit an adequate Crowdsourced Virtual Team (CVT) while balancing the profits of the task publisher and the crowdsourced workers, previous studies proposed various algorithms, including the Genetic Algorithm (GA) and the Alternating Variable Method (AVM); however, their performance is still limited. In this study, we propose a novel hybrid metaheuristic algorithm, CVTMaker, to help publishers identify ideal CVTs. CVTMaker combines the (1+1) Evolution Strategy ((1+1)-ES) and AVM to search for solutions. Experimental results show that CVTMaker significantly outperforms GA and AVM on 3,117 and 5,642 of the 6,000 instances, respectively.
Search-Based Software Testing (SBST) formulates testing as an optimization problem, so that search algorithms can be used to tackle it. The goal of SBST is to improve various test adequacy criteria. Among the different types of coverage criteria, in this paper we deal with branch coverage, as it is the most widely used criterion. However, the state-of-the-art fitness function definitions used for branch coverage testing have changed little. To fill this gap, this paper proposes a novel fitness function for branch coverage. Concretely, we first use a negative exponential model to evaluate the hardness of covering essential branches of the program, based on which we approximately evaluate the distance of a test candidate to the target branch. Finally, experiments reveal the promising results of our proposal.
POSTER SESSION: Poster: theory
In their GECCO'12 paper, Doerr and Doerr proved that the k-ary unbiased black-box complexity of OneMax is O(n/k).
For each block of size 2^(k-1) - 1 we set up, in O(k) queries, a virtual coordinate system, which enables us to use an arbitrary unrestricted algorithm to optimize this block. This is possible because the coordinate system introduces a bijection between unrestricted queries and a subset of k-ary unbiased operators. We note that this technique does not depend on OneMax.
Together, this constitutes an algorithm that is conceptually simpler than the one by Doerr and Doerr and at the same time achieves better constant factors in the asymptotic notation. Our algorithm needs (2 + o(1)) · n/(k - 1) queries, where the o(1) term depends on k. Our experimental evaluation of this algorithm shows its efficiency already for 3 ≤ k ≤ 6.
The statistical assessment of empirical comparisons of algorithms is an essential step in heuristic optimization. Classically, researchers have relied on statistical tests. Recently, however, concerns about their use have arisen, and in many fields other (Bayesian) alternatives are being considered. For a proper analysis, different aspects should be taken into account. In this work we focus on the question: what is the probability of a given algorithm being the best? To tackle this question, we propose a Bayesian analysis based on the Plackett-Luce model over rankings, which allows several algorithms to be considered at the same time.
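The question "what is the probability that a given algorithm is the best?" can be illustrated with a simple Bayesian bootstrap over observed rankings. This is a stand-in for the paper's Plackett-Luce analysis, and the rank data below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented results: rows = benchmark instances, columns = algorithms
# A, B, C. Entry [i, j] is the rank of algorithm j on instance i (1 = best).
ranks = np.array([
    [1, 2, 3],
    [1, 3, 2],
    [2, 1, 3],
    [1, 2, 3],
    [3, 1, 2],
])

# Bayesian bootstrap: draw Dirichlet weights over the instances and record
# which algorithm attains the best weighted mean rank in each draw.
n_samples = 10_000
wins = np.zeros(ranks.shape[1])
for _ in range(n_samples):
    w = rng.dirichlet(np.ones(ranks.shape[0]))
    wins[np.argmin(w @ ranks)] += 1

prob_best = wins / n_samples               # P(algorithm j is the best)
```

Unlike a pairwise significance test, the output is a full probability distribution over "who is best", directly answering the question posed above for all algorithms at once.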
A parameterized runtime analysis of randomized local search and evolutionary algorithm for max l-uncut
In the last few years, parameterized complexity has emerged as a new tool for analyzing the running time of randomized local search algorithms. However, such analyses are few and far between. In this paper, we carry out a parameterized running time analysis of a randomized local search algorithm and a (1+1) EA for a classical graph partitioning problem, namely Max l-Uncut.