GECCO '18 - Proceedings of the Genetic and Evolutionary Computation Conference
SESSION: Evolutionary multiobjective optimization
We analyze the effects of including local search techniques in a multi-objective evolutionary algorithm for solving a bi-objective orienteering problem with a single vehicle, where the two conflicting objectives are the minimization of travel time and the maximization of the number of visited customer locations. Experiments are based on a large set of specifically designed problem instances with different characteristics, and they show that local search techniques that focus on only one of the objectives improve the performance of the evolutionary algorithm in terms of both objectives. The analysis also shows that local search techniques are capable of sending locally optimal solutions to the foremost fronts of the multi-objective optimization process, and that these solutions then become the leading factors of the evolutionary process.
We introduce generalized offline orthant search, an algorithmic framework that can be used to solve many problems from evolutionary multiobjective optimization using a common, well-optimized algorithmic core and relatively cheap reduction procedures. The complexity of the core procedure is O(n · (log n)^(k-1)) for n points of dimension k, and it performs well in practice.
We show that the presented approach can perform various flavors of non-dominated sorting and dominance counting, evaluate the ε-indicator, and perform initial fitness assignment for IBEA. It is either competitive with the state-of-the-art algorithms or outperforms them outright for larger problem sizes. We hope that this approach will simplify the future development of efficient algorithms for evolutionary multiobjective optimization.
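As a concrete baseline for one of these tasks, dominance counting can be computed naively in O(n² · k) time; the framework's core procedure accelerates exactly this class of computations. A minimal sketch (illustrative names, minimization assumed):

```python
def dominates(a, b):
    """True if point a Pareto-dominates point b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def dominance_counts(points):
    """For each point, the number of points dominating it -- the kind of
    quantity the offline orthant search computes in O(n * (log n)**(k-1))
    instead of the O(n**2 * k) pairwise loop used here."""
    return [sum(dominates(q, p) for q in points if q is not p) for p in points]

pts = [(1, 4), (2, 2), (3, 3), (4, 1)]
print(dominance_counts(pts))  # [0, 0, 1, 0]: only (3, 3) is dominated
```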
This paper introduces a differential prediction model to predict the varying Pareto-Optimal Solutions (POS) when solving dynamic multiobjective optimization problems (DMOPs). In DMOPs, several competing objective functions and/or constraints change over time. As a consequence, the Pareto-Optimal Solutions and/or Pareto-Optimal Front may vary over time. The differential prediction model forecasts the shift vector, in the decision space, of the population's centroid from the centroid's historical locations in three previous environments. This model is incorporated into a multiobjective evolutionary algorithm based on decomposition to solve DMOPs. After detecting an environmental change, the new positions in the decision space of half of the individuals in the population are forecast using the differential prediction model, while the positions of the others are retained. The proposed model is tested on a number of typical benchmark problems with several dynamic characteristics. Experimental results show that the proposed model is competitive with other state-of-the-art models and approaches proposed for solving DMOPs.
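One plausible reading of such a centroid-based prediction is a second-order difference extrapolation over the three stored centroid locations; the sketch below illustrates the idea and is not the paper's exact formula:

```python
import numpy as np

def predicted_shift(c_prev2, c_prev1, c_curr):
    """Extrapolate the next shift vector of the population centroid from
    its locations in three previous environments (second-order
    differences). Illustrative sketch, not the paper's exact model."""
    v1 = c_curr - c_prev1   # most recent centroid shift
    v0 = c_prev1 - c_prev2  # shift before that
    return v1 + (v1 - v0)   # extrapolate how the shift itself changes

history = [np.array([0.0, 0.0]), np.array([1.0, 0.5]), np.array([2.5, 1.5])]
shift = predicted_shift(*history)
# half of the population would be moved by `shift`; the rest retained
```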
We consider multiobjective optimization problems where objective functions have different (or heterogeneous) evaluation times or latencies. This is of great relevance for (computationally) expensive multiobjective optimization, as there is no reason to assume that all objective functions should take an equal amount of time to be evaluated (particularly when objectives are evaluated separately). To cope with such problems, we propose a variation of the Kriging-assisted reference vector guided evolutionary algorithm (K-RVEA) called heterogeneous K-RVEA (HK-RVEA for short). This algorithm merges two main concepts designed to account for different latencies: a single-objective evolutionary algorithm for selecting training data to train surrogates, and K-RVEA's approach for updating the surrogates. HK-RVEA is validated on a set of bi-objective benchmark problems varying in terms of latencies and correlations between the objectives. The results are also compared to those obtained by previously proposed strategies for such problems, which were embedded in a non-surrogate-assisted evolutionary algorithm. Our experimental study shows that, under certain conditions, such as short latencies between the two objectives, HK-RVEA can outperform the existing strategies as well as an optimizer operating in an environment without latencies.
The working principles of the well-established multi-objective evolutionary algorithm M
Multidisciplinary design optimization problems with competing objectives that involve several interacting components can be called complex systems. Nowadays, it is common to partition the optimization problem of a complex system into smaller subsystems, each with its own subproblem, in part because it is too difficult to deal with the problem all-at-once. Such an approach is suitable for large organisations where each subsystem can have its own (specialised) design team. However, this requires a design process that facilitates collaboration and decision making in an environment where teams may exchange limited information about their own designs, and where the design teams work at different rates, have different time schedules, and are normally not co-located. A multi-objective optimization methodology to address these features is described. Subsystems exchange information about their own optimal solutions on a peer-to-peer basis, and the methodology enables convergence to a set of optimal solutions that satisfy the overall system. This is demonstrated on an example problem where the methodology is shown to perform as well as the ideal but "unrealistic" approach that treats the optimization problem all-at-once.
In recent years, Indicator-based Multi-Objective Evolutionary Algorithms (IB-MOEAs) have become a relatively popular alternative for solving multi-objective optimization problems. IB-MOEAs are normally based on the use of a single performance indicator. However, the effect of combining multiple performance indicators for selecting solutions is a topic that has rarely been explored. In this paper, we propose a hyper-heuristic which combines the strengths and compensates for the weaknesses of four density estimators based on R2, IGD+, ϵ+ and Δp. The selection of the indicator to be used at a particular moment during the search is done using online learning and a Markov chain. Additionally, we propose a novel framework that aims to reduce the computational cost involved in the calculation of the indicator contributions. Our experimental results indicate that our proposed approach can outperform state-of-the-art MOEAs based on decomposition (MOEA/D), reference points (NSGA-III), and the R2 indicator (R2-EMOA) for problems with both few and many objectives.
When working with decomposition-based algorithms, an appropriate set of weights can improve the quality of the final solution. A set of uniformly distributed weights usually leads to well-distributed solutions on a Pareto front. However, there are two main difficulties with this approach. Firstly, it may fail depending on the problem geometry. Secondly, the population size becomes inflexible as the number of objectives increases. In this paper, we propose MOEA/D with Uniformly Randomly Adaptive Weights (MOEA/D-URAW), which uses uniformly random weight generation for creating subproblems, allowing a flexible population size even when working with many-objective problems. During the evolutionary process, MOEA/D-URAW adds and removes subproblems as a function of the sparsity level of the population. Moreover, instead of requiring assumptions about the Pareto front shape, our method adapts its weights to the shape of the problem during the evolutionary process. Experimental results on the WFG41-48 problem classes, which have different Pareto front shapes, show that the proposed method obtains better or equal results in 77.5% of the problems evaluated, from 2 to 6 objectives, when compared with state-of-the-art methods from the literature.
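The sparsity level that drives subproblem addition and removal can be illustrated with a crowding-style measure (distance to the nearest neighbours in objective space); the exact definition used by MOEA/D-URAW is in the paper:

```python
import math

def sparsity_level(point, population, k=2):
    """Sum of distances to the k nearest neighbours in objective space --
    an illustrative stand-in for the sparsity measure that decides where
    new subproblems should be added (high sparsity = under-covered region)."""
    dists = sorted(math.dist(point, q) for q in population if q != point)
    return sum(dists[:k])

pop = [(0.0, 1.0), (0.1, 0.9), (0.5, 0.5), (1.0, 0.0)]
most_sparse = max(pop, key=lambda p: sparsity_level(p, pop))
print(most_sparse)  # (1.0, 0.0): the most isolated point on this front
```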
In multiobjective optimization, many techniques are used to visualize the results, ranging from traditional general-purpose data visualization techniques to approaches tailored to the specificities of multiobjective optimization. The number of specialized approaches has grown rapidly in recent years. To assist both users and developers in this field, we propose a taxonomy of methods for visualizing Pareto front approximations. It builds on the nature of the visualized data and the properties of the visualization methods rather than on the employed visual representations. It covers methods for visualizing individual approximation sets resulting from a single algorithm run as well as multiple approximation sets produced in repeated runs. The proposed taxonomy categories are characterized and illustrated with selected examples of visualization methods. We expect the proposed taxonomy to be insightful for the multiobjective optimization community, to ease communication among its members, and to help focus further development of visualization methods.
When and why can evolutionary multi-objective optimization (EMO) algorithms cover the entire Pareto set? That is a major concern for EMO researchers and practitioners. A recent theoretical study revealed that (roughly speaking) if the Pareto set forms a topological simplex (a curved line, a curved triangle, a curved tetrahedron, etc.), then decomposition-based EMO algorithms can cover the entire Pareto set. Usually, we cannot know the true Pareto set and have to estimate its topology by using the population of EMO algorithms during or after the run. This paper presents a data-driven approach to analyzing the topology of the Pareto set. We give a theory of how to recognize the topology of the Pareto set from data and implement an algorithm to judge whether the true Pareto set may form a topological simplex or not. Numerical experiments show that the proposed method correctly recognizes the topology of high-dimensional Pareto sets with a reasonable population size.
A promising idea for evolutionary constrained optimization is to efficiently utilize not only feasible solutions (feasible individuals) but also infeasible ones. In this paper, we propose a simple implementation of this idea in MOEA/D. In the proposed method, MOEA/D has two grids of weight vectors. One is used for maintaining the main population as in the standard MOEA/D. In the main population, feasible solutions always have higher fitness than infeasible ones. Among infeasible solutions, solutions with smaller constraint violations have higher fitness. The other grid is for maintaining a secondary population where non-dominated solutions with respect to scalarizing function values and constraint violations are stored. More specifically, a single non-dominated solution with respect to the scalarizing function and the total constraint violation is stored for each weight vector. A new solution is generated from a pair of neighboring solutions in the two grids. That is, there exist three possible combinations of two parents: both from the main population, both from the secondary population, and each from each population. The proposed MOEA/D variant is compared with the standard MOEA/D and other evolutionary algorithms for constrained multiobjective optimization through computational experiments.
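The fitness ordering used in the main population above can be sketched as a comparator (the dictionary fields are illustrative; minimization of the scalarizing function is assumed):

```python
def better(a, b):
    """True if solution a is preferred over b in the main population:
    feasible beats infeasible; among infeasible, smaller total constraint
    violation wins; among feasible, smaller scalarizing value wins."""
    va, vb = a["violation"], b["violation"]
    if (va == 0) != (vb == 0):        # exactly one is feasible
        return va == 0
    if va > 0:                        # both infeasible
        return va < vb
    return a["scalar"] < b["scalar"]  # both feasible

a = {"scalar": 1.2, "violation": 0.0}
b = {"scalar": 0.8, "violation": 0.3}
print(better(a, b))  # True: feasibility outranks the better scalarizing value
```

The secondary population instead keeps, per weight vector, a solution non-dominated with respect to the pair (scalarizing value, total violation), so useful infeasible solutions survive there.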
Multiobjective optimisation in dynamic environments is challenging due to the presence of dynamics in the problems in question. Whilst much progress has been made in benchmarks and algorithm design for dynamic multiobjective optimisation, there is a lack of work on the detectability of environmental changes and how it affects the performance of evolutionary algorithms. This gap is not intentional, but rather due to the unavailability of suitable test cases to study. To bridge the gap, this work presents several scenarios where environmental changes are less likely to be detected. Our experimental studies suggest that such hard-to-detect environments pose a significant challenge to evolutionary algorithms.
In sparse hyperspectral unmixing, regularization methods inevitably suffer from the "decision ahead of solution" issue concerning the regularization parameter, which is not conducive to practical applications. To settle this issue, a two-phase multiobjective sparse unmixing (Tp-MoSU) approach has been proposed recently. However, Tp-MoSU has limited performance on high-noise data and uses little spatial-contextual information in estimating abundances. To address the first problem, a tri-objective optimization model is established for each of the two phases to model mixed additive noise automatically. To address the second problem, a dual spatial exploiting objective is specially designed in the second phase to exploit similarity among adjacent pixels, which can improve the quality of estimated abundances. In addition, memetic evolutionary algorithms are carefully adapted for each of the two phases to improve convergence. The experimental results on several representative data sets demonstrate that the proposed method performs better than Tp-MoSU in both phases and consistently better than some advanced regularization algorithms in abundance estimation under mixed additive noise.
Component-level study of a decomposition-based multi-objective optimizer on a limited evaluation budget
Decomposition-based algorithms have emerged as one of the most popular classes of solvers for multi-objective optimization. Despite their popularity, a lack of guidance exists for how to configure such algorithms for real-world problems, based on the features or contexts of those problems. One context that is important for many real-world problems is that function evaluations are expensive, and so algorithms need to be able to provide adequate convergence on a limited budget (e.g. 500 evaluations). This study contributes to emerging guidance on algorithm configuration by investigating how the convergence of the popular decomposition-based optimizer MOEA/D, over a limited budget, is affected by choice of component-level configuration. Two main aspects are considered: (1) impact of sharing information; (2) impact of normalisation scheme. The empirical test framework includes detailed trajectory analysis, as well as more conventional performance indicator analysis, to help identify and explain the behaviour of the optimizer. Use of neighbours in generating new solutions is found to be highly disruptive for searching on a small budget, leading to better convergence in some areas but far worse convergence in others. The findings also emphasise the challenge and importance of using an appropriate normalisation scheme.
Efficient search techniques using adaptive discretization of design variables on real-coded evolutionary computations
In this paper, we evaluate the effects of adaptive discretization of design variables in real-coded evolutionary computations (RCECs). While an appropriate granularity of the design variables can improve convergence in RCECs, it is difficult to decide on the appropriate one in advance in most practical optimization problems. Besides, when the granularity is too coarse, diversity may be lost. To address these difficulties, we propose two adaptive discretization techniques that discretize each design variable using a granularity determined according to an indicator of the solution distribution in the design space. In this study, the standard deviation (SD) or the estimated probability density function (ePDF) is used as the indicator for determining the granularities of the design variables. We use NSGA-II as the RCEC and thirteen benchmark problems including engineering problems. The generational distance (GD) and inverted generational distance (IGD) metrics are used for investigating the performance in terms of convergence and diversity, respectively. To confirm the statistical significance of the results, the Wilcoxon rank-sum test and Welch's t-test are applied to each problem. The experimental results show that both proposed methods can automatically improve convergence in many problems. In addition, it is confirmed that diversity is also maintained.
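The SD-based variant can be sketched as follows; the proportionality factor `scale` is an assumption for illustration, not the paper's setting:

```python
import statistics

def discretize(values, scale=0.1):
    """Round one design variable (across the population) to a grid whose
    step is proportional to the population's standard deviation in that
    variable, so the granularity adapts to the solution distribution."""
    sd = statistics.pstdev(values)
    if sd == 0:                 # population converged in this variable
        return list(values)
    step = scale * sd
    return [round(v / step) * step for v in values]

print(discretize([0.12, 0.18, 0.34, 0.55, 0.91]))
```

As the population converges, the standard deviation shrinks and so does the grid step, which is what keeps the granularity from becoming too coarse late in the run.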
Improving the performance of MO-RV-GOMEA on problems with many objectives using tchebycheff scalarizations
The Multi-Objective Real-Valued Gene-pool Optimal Mixing Evolutionary Algorithm (MO-RV-GOMEA) has been shown to exhibit excellent performance in solving various bi-objective benchmark and real-world problems. We assess the competence of MO-RV-GOMEA in tackling many-objective problems, which are normally defined as problems with at least four conflicting objectives. Most Pareto dominance-based Multi-Objective Evolutionary Algorithms (MOEAs) typically diminish in performance when the number of objectives exceeds three, because selection pressure toward the Pareto-optimal front is lost. This is potentially less of an issue for MO-RV-GOMEA because its variation operator creates each offspring solution by iteratively altering a currently existing solution in a few decision variables at a time, and changes are only accepted if they result in a Pareto improvement. For most MOEAs, integrating scalarization methods is potentially beneficial in the many-objective context. Here, we investigate the possibility of improving the performance of MO-RV-GOMEA by further guiding the improvement checks during solution variation with carefully constructed Tchebycheff scalarizations. Results obtained from experiments performed on a selection of well-known problems from the DTLZ and WFG test suites show that MO-RV-GOMEA is by design already well-suited for many-objective problems. Moreover, enhanced with Tchebycheff scalarizations, it outperforms MOEA/D-2TCHMFI, a state-of-the-art decomposition-based MOEA.
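The Tchebycheff scalarization used to guide the improvement checks has a compact form; a minimal sketch (minimization, ideal point z*):

```python
def tchebycheff(f, w, z):
    """g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|: a variation step is
    judged better along this weight vector if it lowers g."""
    return max(wi * abs(fi - zi) for fi, wi, zi in zip(f, w, z))

f = [0.6, 0.3, 0.9]   # objective values of a candidate solution
w = [0.5, 0.3, 0.2]   # one carefully constructed weight vector
z = [0.0, 0.0, 0.0]   # ideal point
print(tchebycheff(f, w, z))  # 0.3: the largest weighted distance to z*
```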
In recent years, the design of new selection mechanisms has become a popular trend in the development of Multi-Objective Evolutionary Algorithms (MOEAs). This trend has been motivated by the aim of maintaining a good balance between convergence and diversity of the solutions. Reference-based selection is, without a doubt, one of the most promising schemes in this area. However, reference-based MOEAs are known to have difficulties solving multi-objective problems with complicated Pareto fronts, mainly because they rely on the consistency between the Pareto front shape and the distribution of the reference weight vectors. In this paper, we propose a reference-based MOEA which uses the Inverted Generational Distance plus (IGD+) indicator. The proposed approach adopts a novel hypercube-based method for approximating the reference set. Our results indicate that our proposed approach is able to obtain solutions of a similar quality to those obtained by RVEA, MOEA/DD, NSGA-III and MOMBI-II on several test problems traditionally adopted in the specialized literature, and is able to outperform them on problems with complicated Pareto fronts.
This work proposes a decomposition-based algorithm, CMOEA/D-DMA, for constrained many-objective optimization. For constrained multi-objective optimization, the TNSDM algorithm, which uses directed mating to select infeasible solutions with better objective values than feasible ones as parents, was previously proposed, and its effectiveness was verified on several test problems. However, since TNSDM uses non-dominated sorting, its search performance deteriorates as the number of objectives increases. For many-objective optimization, decomposing the objective space is a promising approach, and MOEA/D is its representative algorithm. However, since the conventional MOEA/D maintains only one solution for each weight vector, feasible solutions are preferred over infeasible ones. Infeasible solutions are simply discarded, even if they have better objective values than the maintained feasible solutions and could provide clues for the search. To solve constrained many-objective optimization problems, in this work we propose CMOEA/D-DMA, which combines MOEA/D with directed mating and archives of infeasible solutions. The directed mating in CMOEA/D-DMA selects useful infeasible solutions with better scalarizing function values than feasible ones as parents and maintains them in archives. Experimental results on continuous mCDTLZ and discrete knapsack problems with many objectives show that the proposed CMOEA/D-DMA achieves higher search performance than the conventional TNSDM, MOEA/D, and NSGA-III.
In large-scale optimisation, most algorithms require a separation of the variables into multiple smaller groups and aim to optimise these variable groups independently. In single-objective optimisation, a variety of methods aim to identify the best variable groups, most recently Differential Grouping 2. However, it is not trivial to apply these methods to multiple objectives, as the variable interactions might differ between objective functions. In this work, we introduce four different transfer strategies that allow any single-objective grouping mechanism to be used directly on multi-objective problems. We apply these strategies to a popular single-objective grouping method (Differential Grouping 2) and compare the performance of the obtained groups inside three recent large-scale multi-objective algorithms (MOEA/DVA, LMEA, WOF). The results show that the performance of the original MOEA/DVA and LMEA can in some cases be improved by our proposed grouping variants or even by random groups, while the computational budget is dramatically reduced. In the WOF algorithm, a significant improvement in performance compared to random groups or the standard version of the algorithm is, on average, not observed.
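One plausible transfer strategy (given purely as an illustration; not necessarily one of the paper's four) is to run the single-objective grouping once per objective and then merge any groups that share a variable:

```python
def merge_groups(groupings):
    """Merge per-objective variable groups with union-find so that
    variables interacting in at least one objective end up together."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for groups in groupings:               # one grouping per objective
        for group in groups:
            root = find(group[0])
            for v in group[1:]:
                parent[find(v)] = root
    merged = {}
    for v in parent:
        merged.setdefault(find(v), set()).add(v)
    return sorted(sorted(g) for g in merged.values())

# objective 1 links {0, 1}; objective 2 links {1, 2}; variable 3 stays alone
print(merge_groups([[[0, 1], [2], [3]], [[1, 2], [0], [3]]]))  # [[0, 1, 2], [3]]
```

A strategy like this errs on the side of larger groups, trading some separability for safety when the objectives disagree about which variables interact.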
mQAPViz: a divide-and-conquer multi-objective optimization algorithm to compute large data visualizations
Algorithms for data visualizations are essential tools for transforming data into useful narratives. Unfortunately, very few visualization algorithms can handle the large datasets of many real-world scenarios. In this study, we address the visualization of these datasets as a Multi-Objective Optimization Problem. We propose
In this paper, a new R2 indicator is proposed for better hypervolume approximation. First, we illustrate by examples that the original R2 indicator is not a good approximation of the hypervolume. Then the new R2 indicator is derived based on the divergence theorem and a Riemann sum approximation. The only difference between the original R2 and the new R2 is an added exponent, which equals the dimensionality of the objective space (i.e., the number of objectives). The new R2, the original R2, and some other R2 variants are compared through comprehensive numerical studies on different solution sets under different scenarios. The results show the superiority of the proposed new R2 indicator over the other R2 variants for hypervolume approximation, with the new R2 indicator achieving the best linear relation with the true hypervolume.
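Based on the abstract's description, the construction can be sketched as follows: for each direction vector, measure how far the attained region extends toward the reference point, raise that length to the power m (the number of objectives), and average. The function below is an illustrative sketch of this idea, not the paper's exact definition (minimization toward a dominated reference point is assumed):

```python
def r2_hv_approx(points, directions, ref, m):
    """R2-style hypervolume approximation: average, over direction
    vectors, of the m-th power of the farthest reachable length along
    each direction inside the dominated region."""
    total = 0.0
    for lam in directions:
        best = max(min((ref[j] - p[j]) / lam[j] for j in range(m))
                   for p in points)
        total += best ** m      # the exponent m is the new ingredient
    return total / len(directions)

pts = [(0.2, 0.2)]
dirs = [(1.0, 0.5), (0.5, 1.0), (1.0, 1.0)]
print(r2_hv_approx(pts, dirs, ref=(1.0, 1.0), m=2))
```

Without the `** m`, the same loop reduces to an original-R2-style average of scalarized lengths, which is why the two indicators differ only in that exponent.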
Pareto Local Search (PLS) is a simple yet effective optimization approach dedicated to multi-objective combinatorial optimization. It can, however, suffer from a high computational cost, especially when the size of the Pareto optimal set is relatively large. Recently, incorporating decomposition in PLS has revealed a high potential, not only in providing high-quality approximation sets, but also in speeding up the search process. Using the bi-objective Unconstrained Binary Quadratic Programming (bUBQP) problem as an illustrative benchmark, we demonstrate some shortcomings in the resulting decomposition-guided Parallel Pareto Local Search (PPLS), and we propose to revisit the PPLS design accordingly. For instances with an a priori unknown Pareto front shape, we show that a simple pre-processing technique to estimate the scale of the Pareto front can help PPLS to better balance the workload. Furthermore, we propose a simple technique to deal with the critically important scalability issue raised by PPLS when deployed over a large number of computing nodes. Our investigations show that the revisited version of PPLS provides consistent performance, suggesting that decomposition-guided PPLS can be further generalized in order to improve both parallel efficiency and approximation quality.
For optimisation problems with multiple objectives and large search spaces, it may not be feasible to find all optimal solutions. Even if possible, a decision maker (DM) is only interested in a small number of these solutions. Incorporating a DM's solution preferences into the process reduces the problem's search space by focusing only on regions of interest. Allowing a DM to interact and alter their preferences during a single optimisation run facilitates learning and mistake correction, and improves the search for desired solutions. In this paper, we apply an interactive framework to four leading multi-objective evolutionary algorithms (MOEAs), which use reference points to model preferences. Furthermore, we propose a new performance metric for algorithm responsiveness to preference changes, and evaluate these algorithms using this metric. Interactive algorithms must respond to changes in DM preferences and we show how our new metric is able to differentiate between the four algorithms when run on the ZDT suite of test problems. Finally, we identify characteristics of these methods that determine their level of response to change.
The problem of 3-dimensional en-route airspace sectorization has been modelled as a multi-objective optimization problem by taking into account conflicting air traffic controller workloads. In practice, air traffic controllers impose limits on these objectives, which may not be captured completely in the Pareto front obtained using the multi-objective model. Hence, in this paper, we propose a preference-based multi-objective optimization model for 3-dimensional en-route sectorization. Through the use of reference point(s), the proposed model is able to find multiple solutions that satisfy the air traffic controller preference(s) faster. The population-based NSGA-II has been used to solve the preference-based 3-dimensional en-route sectorization problem. The performance of preference-based sectorization is evaluated using actual flight data from the Singapore regional airspace. The results are compared with a conventional multi-objective optimization model that integrates the preference as a constraint. Results indicate that the preference-based model generally performs better than the constraint-based model. Further, multiple parallel runs of preference-based optimization can provide a greater variety of choices in airspace sectorization for the air traffic controllers.
This research proposes a novel indicator-based hybrid evolutionary approach that combines approximate and exact algorithms. We apply it to a new bi-criteria formulation of the travelling thief problem, which is known to the Evolutionary Computation community as a benchmark multi-component optimisation problem that interconnects two classical NP-hard problems: the travelling salesman problem and the 0-1 knapsack problem. Our approach employs the exact dynamic programming algorithm for the underlying packing-while-travelling problem as a subroutine within a bi-objective evolutionary algorithm. This design takes advantage of the data extracted from Pareto fronts generated by the dynamic program to achieve better solutions. Furthermore, we develop a number of novel indicators and selection mechanisms to strengthen the synergy of the two algorithmic components of our approach. The results of computational experiments show that the approach is capable of outperforming the state-of-the-art results for the single-objective case of the problem.
Multicast routing for Wavelength Division Multiplexing (WDM) optical networks is a challenging problem. The Multicast Routing and Wavelength Assignment (MRWA) problem has been proven to be NP-complete. In this paper, the MRWA problem in WDM networks is modeled as a Multi-Objective Multicast Routing and Wavelength Assignment (MOMRWA) optimization problem with three objectives: minimizing the resource cost, minimizing the end-to-end delay, and minimizing the number of used channels. Some Quality of Service metrics are also considered in MOMRWA. To tackle the MOMRWA problem, a multi-objective genetic algorithm, MRWA_SSNSGA-II, is proposed, based on the Steady-State Non-Dominated Sorting Genetic Algorithm II combined with the Efficient Non-Domination Level Update (ENLU) strategy. The purpose of the ENLU strategy is to reduce the computing time needed to maintain the non-domination level structure. To avoid loops generated by gene recombination, a light-tree recovery mechanism based on the Minimum Path Cost Heuristic is proposed. We evaluate the proposed MRWA_SSNSGA-II on a large number of test instances of different sizes. Experimental results demonstrate that MRWA_SSNSGA-II outperforms some recent GA-based multi-objective optimization algorithms for the MOMRWA problem in terms of solution quality.
Two enhancements for improving the convergence speed of a robust multi-objective coevolutionary algorithm
We describe two enhancements that significantly improve the rapid convergence behavior of DECMO2, a previously proposed robust coevolutionary algorithm that integrates three different multi-objective space exploration paradigms: differential evolution, two-tier Pareto-based selection for survival, and decomposition-based evolutionary guidance. The first enhancement is a refined active search adaptation mechanism that relies on run-time sub-population performance indicators to estimate the convergence stage and dynamically adjust and steer certain parts of the coevolutionary process in order to improve its overall efficiency. The second enhancement consists of a directional intensification operator that is applied in the early part of the run during the decomposition-based search phases. This operator creates new random local linear individuals based on recently successful solution candidates of a given directional decomposition vector. As the two efficiency-related enhancements are complementary, our results show that the resulting coevolutionary algorithm is a highly competitive improvement of the baseline strategy on a comprehensive test set aggregated from 25 (standard) benchmark multi-objective optimization problems.