GECCO '21: Proceedings of the Genetic and Evolutionary Computation Conference
SESSION: Evolutionary multiobjective optimization
Pareto compliance is a critical property of quality indicators (QIs) that measure convergence to the Pareto front: a Pareto-compliant QI never contradicts the order imposed by the Pareto dominance relation. Hence, Pareto-compliant QIs are preferable when comparing approximation sets of multi-objective evolutionary algorithms (MOEAs), since they do not produce misleading conclusions. However, the practical effect of Pareto-compliant QIs as the backbone of MOEAs' survival mechanisms is not entirely understood. In this paper, we study the practical effect of IGD++ (a combination of IGD+ and the hypervolume indicator), IGD+, and IGD, which are Pareto-compliant, weakly Pareto-compliant, and non-Pareto-compliant QIs, respectively. To this end, we analyze the convergence and diversity properties of steady-state MOEAs based on these QIs throughout the whole evolutionary process. Our experimental results show that, in general, a Pareto-compliant QI in a selection mechanism yields little practical advantage over weaker QIs when the whole evolutionary process is taken into account.
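The difference between the indicators can be made concrete. For minimization, IGD averages the Euclidean distance from each reference point to its nearest solution, while IGD+ replaces that distance with the dominance-compliant modified distance d+(z, a) = ||max(a − z, 0)||, which is what makes it weakly Pareto-compliant. A minimal sketch (the reference and solution sets are illustrative):

```python
import math

def igd(ref_set, sol_set):
    """Mean Euclidean distance from each reference point to its nearest solution.
    Not Pareto compliant: a dominated set can score better than a dominating one."""
    return sum(
        min(math.dist(z, a) for a in sol_set) for z in ref_set
    ) / len(ref_set)

def igd_plus(ref_set, sol_set):
    """IGD+ (weakly Pareto compliant, minimization): only the components in which
    a solution is worse than the reference point contribute to the distance."""
    def d_plus(z, a):
        return math.sqrt(sum(max(ai - zi, 0.0) ** 2 for zi, ai in zip(z, a)))
    return sum(
        min(d_plus(z, a) for a in sol_set) for z in ref_set
    ) / len(ref_set)

# Reference set: three points on a linear front; two candidate approximation sets.
Z = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
A = [(0.0, 1.0), (1.0, 0.0)]   # the two extreme Pareto-optimal points
B = [(0.4, 0.6)]               # a single near-middle point
print(igd(Z, A), igd_plus(Z, A))
print(igd(Z, B), igd_plus(Z, B))
```

Note how IGD+ never penalizes a solution in components where it is already at least as good as the reference point, which is exactly the dominance-respecting behaviour the plain Euclidean distance lacks.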
Normalization is an important algorithmic component of multiobjective evolutionary algorithms (MOEAs), and different normalization methods have been proposed in the literature. Recently, several studies have examined the effects of these methods; however, the existing evaluation methods for investigating the effects of normalization have significant drawbacks. In this paper, we discuss the limitations of the existing evaluation methods and propose a new metric to facilitate the investigation of normalization methods. Our analysis clearly shows the superiority of the proposed metric over the existing methods. We also use the proposed metric to compare three popular normalization methods on problems with different Pareto front shapes.
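The abstract does not name the normalization methods compared; as a point of reference, the baseline most of them build on is min-max scaling between an ideal and a nadir point estimated from the population. A deliberately simplified sketch (real MOEAs estimate the nadir more carefully, e.g., from extreme solutions; the example population is illustrative):

```python
def normalize(points):
    """Min-max normalization of a list of objective vectors using the ideal point
    (componentwise minimum) and a naive nadir estimate (componentwise maximum of
    the set). Maps each objective to roughly [0, 1]."""
    m = len(points[0])
    ideal = [min(p[k] for p in points) for k in range(m)]
    nadir = [max(p[k] for p in points) for k in range(m)]
    eps = 1e-12  # guard against a degenerate objective with zero range
    return [
        tuple((p[k] - ideal[k]) / max(nadir[k] - ideal[k], eps) for k in range(m))
        for p in points
    ]

# Objectives on very different scales become comparable after normalization.
pop = [(1.0, 100.0), (3.0, 50.0), (2.0, 75.0)]
print(normalize(pop))
```

The design question the studies above probe is precisely how the ideal/nadir estimates should be obtained and updated, since a poor nadir estimate distorts the scaling on problems with irregular Pareto front shapes.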
We propose a new algorithm for the extreme hypervolume contributor/contribution problem, i.e., the problem of finding the point with the minimum/maximum contribution to the hypervolume and (optionally) the value of this extreme contribution. Our algorithm is motivated by the Improved Quick Hypervolume (IQHV) algorithm: it works in a divide-and-conquer manner and uses lower and upper bounds on the contribution to speed up calculations. It may be applied directly in multiobjective optimization methods that select solutions based on the extreme contribution to the hypervolume, or within greedy algorithms for the hypervolume subset selection problem often used in EMO. The proposed algorithm is the first dedicated algorithm for the maximum contribution problem, and in the reported computational experiment it outperforms the state-of-the-art IWFG algorithm for the minimum contribution problem on most data sets. We show that the proposed algorithm uses the lower and upper bounds very effectively by comparing it to a reference algorithm with access to an oracle that provides the best contributor in advance. We also propose a modification of the general IQHV scheme in which the order in which objectives are processed, which influences the construction of sub-problems, is heuristically adapted, and we show that this modification improves the efficiency of the algorithm.
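For intuition, in two objectives every hypervolume contribution of a mutually nondominated set can be read off a sorted "staircase", so the extreme contributor can be found by brute force; dedicated algorithms such as IQHV and IWFG exist precisely to avoid this exhaustive computation in higher dimensions. An illustrative sketch (the point set and reference point are made up):

```python
def extreme_contributor(points, ref, find_min=True):
    """Exact hypervolume contributions of a mutually nondominated set in two
    minimized objectives, via the standard sorted-staircase formula; returns
    the index and value of the minimum (or maximum) contributor. Brute force
    for illustration only: all contributions are computed."""
    idx = sorted(range(len(points)), key=lambda i: points[i][0])
    contrib = {}
    for k, i in enumerate(idx):
        x, y = points[i]
        # Exclusive region is bounded by the x of the next point to the right
        # and the y of the previous point to the left (or the reference point).
        right_x = points[idx[k + 1]][0] if k + 1 < len(idx) else ref[0]
        upper_y = points[idx[k - 1]][1] if k > 0 else ref[1]
        contrib[i] = (right_x - x) * (upper_y - y)
    pick = min if find_min else max
    best = pick(contrib, key=contrib.get)
    return best, contrib[best]

pts = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
print(extreme_contributor(pts, ref=(5.0, 5.0), find_min=True))
print(extreme_contributor(pts, ref=(5.0, 5.0), find_min=False))
```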
Landscape features and automated algorithm selection for multi-objective interpolated continuous optimisation problems
In this paper, we demonstrate the application of features from landscape analysis, initially proposed for multi-objective combinatorial optimisation, to a benchmark set of 1 200 randomly generated multi-objective interpolated continuous optimisation problems (MO-ICOPs). We also explore the benefits of evaluating the considered landscape features on the basis of a fixed-size sampling of the search space. This allows fine control over cost when aiming for an efficient application of feature-based automated performance prediction and algorithm selection. While previous work shows that the parameters used to generate MO-ICOPs are able to discriminate the convergence behaviour of four state-of-the-art multi-objective evolutionary algorithms, our experiments reveal that the proposed (black-box) landscape features used as predictors deliver a similar accuracy when combined with a classification model. In addition, we analyse the relative importance of each feature for performance prediction and algorithm selection.
The hypervolume indicator is widely used by multi-objective optimization algorithms and for assessing their performance. We investigate sets of p vectors in the biobjective space that maximize the hypervolume indicator with respect to some reference point, referred to as p-optimal distributions. We prove explicit lower and upper bounds on the gap between the hypervolumes of the p-optimal distribution and the ∞-optimal distribution (the Pareto front) as a function of p, of the reference point, and of some Lipschitz constants. On a wide class of functions, this optimality gap cannot be smaller than Ω(1/p), thereby establishing a bound on the optimal convergence speed of any algorithm. For functions with either bilipschitz or convex Pareto fronts, we also establish an upper bound, and the gap is hence Θ(1/p). The presented bounds are not only asymptotic; in particular, functions with a linear Pareto front have a normalized exact gap of 1/(p + 1) for any reference point dominating the nadir point.
We empirically investigate the exact optimality gap on a small set of Pareto fronts for values of p up to 1000 and find in all cases a dependency resembling 1/(p + CONST).
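The exact 1/(p + 1) gap for a linear front is easy to check numerically: for the front {(x, 1 − x) : x ∈ [0, 1]} with reference point (1, 1), the p-optimal distribution is known to consist of p uniformly spaced points, and the gap relative to the front's hypervolume of 1/2 comes out as exactly 1/(p + 1). A small sketch (the specific reference point is our choice for simplicity):

```python
def hv_staircase(points, ref):
    """Hypervolume (minimization) of a nondominated 2-D set w.r.t. ref,
    computed as disjoint vertical strips after sorting by the first objective."""
    pts = sorted(points)
    area = 0.0
    for k, (x, y) in enumerate(pts):
        right_x = pts[k + 1][0] if k + 1 < len(pts) else ref[0]
        area += (right_x - x) * (ref[1] - y)
    return area

def normalized_gap(p):
    """Gap between the hypervolume of p uniformly spaced points on the linear
    front {(x, 1-x)} and the hypervolume of the whole front, relative to the
    latter. Uniform spacing is the known p-optimal distribution for a linear
    front; reference point fixed at (1, 1)."""
    pts = [(i / (p + 1), 1 - i / (p + 1)) for i in range(1, p + 1)]
    hv_front = 0.5  # exact hypervolume of the full front w.r.t. (1, 1)
    return (hv_front - hv_staircase(pts, (1.0, 1.0))) / hv_front

for p in (1, 2, 10, 100):
    print(p, normalized_gap(p), 1 / (p + 1))
```

Each printed pair should agree up to floating-point error, matching the exact-gap statement above for this particular reference point.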
In this paper, we revisit the distance-based subset selection (DSS) algorithm in evolutionary multi-objective optimization. First, we show one drawback of the DSS algorithm: a uniformly distributed solution set is not always selected. Then, we show that this drawback can be overcome by maximizing the uniformity level of the selected solution set, defined as the minimum distance between any two solutions in the set. Furthermore, we prove that the DSS algorithm is a greedy inclusion algorithm with respect to maximization of the uniformity level. Based on this observation, we generalize DSS to a subset selection problem whose objective is to maximize the uniformity level of the subset. In addition to the greedy inclusion DSS algorithm, a greedy removal algorithm and an iterative algorithm are proposed for the generalized DSS problem. We also extend the Euclidean distance in the original DSS to other widely used and user-defined distances. We conduct extensive experiments on solution sets over different types of Pareto fronts to compare the three DSS algorithms with different distances. Our results suggest the usefulness of the generalized DSS for selecting a uniform subset. The effect of using different distances on the selected subsets is also analyzed.
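The greedy inclusion view of DSS corresponds to farthest-point selection: repeatedly add the candidate whose minimum distance to the already-selected points is largest. A minimal sketch with a pluggable distance function (the extreme starting point and the test data are illustrative choices):

```python
import math

def dss_greedy(candidates, k, dist=math.dist):
    """Distance-based subset selection as greedy inclusion: starting from an
    extreme point, repeatedly add the candidate with the largest minimum
    distance to the already-selected points. The distance function is
    pluggable, mirroring the generalization beyond the Euclidean distance."""
    selected = [max(candidates)]  # an extreme point as a simple starting choice
    while len(selected) < k:
        best = max(
            (c for c in candidates if c not in selected),
            key=lambda c: min(dist(c, s) for s in selected),
        )
        selected.append(best)
    return selected

def uniformity(subset, dist=math.dist):
    """Uniformity level: minimum pairwise distance within the subset."""
    return min(
        dist(a, b) for i, a in enumerate(subset) for b in subset[i + 1:]
    )

# Eleven points on a linear front; select a uniform subset of three.
front = [(i / 10, 1 - i / 10) for i in range(11)]
sub = dss_greedy(front, 3)
print(sub, uniformity(sub))
```

On this symmetric front the greedy inclusion picks the two extremes and the middle point, which is also the subset maximizing the uniformity level.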
Hypervolume subset selection (HSS) aims to select, from a candidate solution set, a subset whose hypervolume is maximized. Because HSS is NP-hard, greedy algorithms are the most efficient approach to it in many-objective optimization. However, when the number of objectives is large, calculating the hypervolume contribution in greedy HSS is time-consuming, which makes greedy HSS inefficient. To address this issue, we propose a greedy approximated HSS algorithm. The main idea is to use an R2-based hypervolume contribution approximation method in the greedy HSS. In the implementation, a utility tensor structure is introduced to facilitate the calculation of the hypervolume contribution approximation, and the tensor information from the previous step is reused in the current step to accelerate the calculation. We also apply the lazy strategy in the proposed algorithm to further improve its efficiency. We test the greedy approximated HSS algorithm on candidate solution sets with 3 to 10 objectives. The experimental results show that the proposed algorithm is much faster than the state-of-the-art greedy HSS algorithm in many-objective optimization, while their hypervolume performance is almost the same.
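The lazy strategy mentioned above can be illustrated independently of the R2 approximation: because the hypervolume is submodular, a point's contribution never increases as the subset grows, so cached contributions act as upper bounds and need re-evaluation only when they surface at the top of a max-heap. A 2-D sketch with exact contributions (the candidate set is illustrative):

```python
import heapq

def hv2d(points, ref):
    """Exact 2-D hypervolume (minimization) of a nondominated set via the
    sorted staircase: disjoint vertical strips between consecutive x values."""
    pts = sorted(points)
    area = 0.0
    for k, (x, y) in enumerate(pts):
        right_x = pts[k + 1][0] if k + 1 < len(pts) else ref[0]
        area += (right_x - x) * (ref[1] - y)
    return area

def greedy_hss_lazy(cands, k, ref):
    """Greedy HSS with the lazy strategy: cached contributions live in a
    max-heap and are recomputed only when stale, which is safe because
    contributions can only shrink as the chosen subset grows (submodularity).
    Contributions are exact 2-D values here; the paper substitutes an R2-based
    approximation to cope with many objectives."""
    chosen, base, gen = [], 0.0, 0
    heap = [(-hv2d([c], ref), i, 0) for i, c in enumerate(cands)]
    heapq.heapify(heap)
    while len(chosen) < k and heap:
        neg_contrib, i, g = heapq.heappop(heap)
        if g == gen:                  # cached value is up to date: accept it
            chosen.append(cands[i])
            base = hv2d(chosen, ref)
            gen += 1
        else:                         # stale: recompute and push back
            fresh = hv2d(chosen + [cands[i]], ref) - base
            heapq.heappush(heap, (-fresh, i, gen))
    return chosen

cands = [(0.1, 0.9), (0.3, 0.5), (0.5, 0.3), (0.9, 0.1), (0.45, 0.45)]
print(greedy_hss_lazy(cands, 3, ref=(1.0, 1.0)))
```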
Realistic utility functions prove difficult for state-of-the-art interactive multiobjective optimization algorithms
Improvements to the design of interactive evolutionary multiobjective optimization algorithms (iEMOAs) are unlikely without quantitative assessment of their behaviour in realistic settings. Experiments with human decision-makers (DMs) are of limited scope due to the difficulty of isolating individual biases and of replicating the experiment with enough subjects, and enough times, to obtain confidence in the results. Simulation studies may help to overcome these issues, but they require realistic simulations of decision-makers. Machine decision-makers (MDMs) provide a way to carry out such simulation studies; however, studies so far have relied on simple utility functions. In this paper, we analyse and compare two state-of-the-art iEMOAs by means of an MDM that uses a sigmoid-shaped utility function. This sigmoid utility function is based on psychologically realistic models from behavioural economics and replicates several realistic human behaviours. Our findings are that, on a variety of well-known benchmarks with two and three objectives, the two iEMOAs do not consistently recover the most-preferred points. We hope that these findings provide an impetus for more directed design and analysis of future iEMOAs.
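To give a flavour of why sigmoid-shaped utilities are harder than the simple (e.g., linear) utility functions used in earlier MDM studies, here is a hypothetical S-shaped utility, not the paper's actual model: each objective's marginal utility saturates away from a target level, so the aggregate offers almost no preference signal far from the preferred region.

```python
import math

def sigmoid_utility(objs, targets, steepness=10.0, weights=None):
    """Hypothetical S-shaped utility for a minimization problem: each
    objective's marginal utility rises steeply as its value improves past a
    target level and saturates on both sides; marginal utilities are combined
    as a weighted sum. Both the functional form and the parameters are
    illustrative assumptions, not the MDM model from the paper."""
    weights = weights or [1.0 / len(objs)] * len(objs)
    return sum(
        w / (1.0 + math.exp(steepness * (f - t)))
        for f, t, w in zip(objs, targets, weights)
    )

# Utilities of two candidate points against aspiration levels (0.4, 0.4):
print(sigmoid_utility((0.2, 0.6), (0.4, 0.4)))
print(sigmoid_utility((0.45, 0.45), (0.4, 0.4)))
```

Because the slope collapses outside a narrow band around each target, pairwise comparisons between two poor (or two good) solutions carry little information, which is one plausible reason iEMOAs struggle to home in on the most-preferred point under such utilities.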
This work proposes a Bayesian optimisation with Gaussian process approach to learning decision maker (DM) preferences in the attribute search space of a multi-objective optimisation problem (MOP). The DM is consulted periodically during optimisation and asked to state their preference over a series of pairwise comparisons of candidate solutions. After each consultation, the most preferred solution is used as the reference point in an appropriate multiobjective evolutionary algorithm (MOEA). The rationale for using Bayesian optimisation is to identify the most preferred location in the decision search space with the fewest DM queries, thereby minimising DM cognitive burden and fatigue. This enables non-expert DMs to be involved in the optimisation process and to make more informed decisions. We further reduce the number of preference queries required by progressively redefining the Bayesian search space to reflect the MOEA's decision bounds as it converges toward the Pareto front. We demonstrate how this approach can locate a reference point close to an unknown preferred location on the Pareto front of both benchmark and real-world problems with relatively few pairwise comparisons.
Interactive evolutionary multiple objective optimization algorithm using a fast calculation of holistic acceptabilities
We propose a novel algorithm, called iMOEA-HA, for interactive evolutionary multiple objective optimization. It combines, in an original way, state-of-the-art paradigms from evolutionary multiple objective optimization and multiple criteria decision aiding. As for the incorporated evolutionary framework, iMOEA-HA implements a steady-state, quasi decomposition-based model with a restricted mating pool. To drive the population towards the Decision Maker's (DM's) most preferred region of the Pareto front, iMOEA-HA exploits the DM's pairwise comparisons of solutions from the current population and ranks all solutions according to their holistic acceptability indices. These acceptabilities are built on the probabilities of attaining a certain number of the most preferred ranks. We evaluate the proposed algorithm's performance on a vast set of optimization problems with different properties and numbers of objectives. Moreover, we compare iMOEA-HA with existing interactive evolutionary algorithms that can be deemed its predecessors, demonstrating its effectiveness and competitiveness.
The quality of solutions in multiobjective evolutionary algorithms (MOEAs) is usually evaluated by objective functions. However, function evaluations (FEs) are often time-consuming in real-world problems, and the large number of FEs required limits the application of MOEAs. In this paper, we propose a fuzzy classifier-based selection strategy to reduce the number of FEs in MOEAs. First, all solutions evaluated in previous generations are used to build a fuzzy classifier. Second, the classifier predicts each unevaluated solution's label and membership degree, and reproduction is repeated until enough offspring solutions classified as positive have been generated. Next, the unevaluated solutions are sorted by membership degree in descending order, and as many solutions as the population size are taken from the top of this ranking. The best half of these solutions are stored in the new population without being evaluated; the other half are evaluated. Finally, the evaluated solutions, together with the evaluated current solutions, undergo environmental selection to form the other half of the new population. The proposed strategy is integrated into two MOEAs, and our experimental results demonstrate its effectiveness in reducing FEs.