GECCO '19: Proceedings of the Genetic and Evolutionary Computation Conference
SESSION: Evolutionary multiobjective optimization
A parameterless performance metric for reference-point based multi-objective evolutionary algorithms
Most preference-based multi-objective evolutionary algorithms use reference points to articulate the decision maker's preferences. Since these algorithms typically converge to a sub-region of the Pareto-optimal front, the use of conventional performance measures (such as hypervolume and inverted generational distance) may lead to misleading results. Therefore, experimental studies in preference-based optimization often resort to graphical methods to compare various algorithms. Though a few ad-hoc measures have been proposed in the literature, they either fail to generalize or involve parameters that are non-intuitive for a decision maker. In this paper, we propose a performance metric that is simple to implement, inexpensive to compute, and most importantly, does not involve any parameters. The so-called expanding hypercube metric is designed to extend the concepts of convergence and diversity to preference-based optimization. We demonstrate its effectiveness through constructed preference solution sets in two and three objectives. The proposed metric is then used to compare two popular reference-point based evolutionary algorithms on benchmark optimization problems with up to 20 objectives.
Surrogate-assisted multiobjective optimization based on decomposition: a comprehensive comparative analysis
A number of surrogate-assisted evolutionary algorithms are being developed for tackling expensive multiobjective optimization problems. On the one hand, a relatively broad range of techniques from both machine learning and multiobjective optimization can be combined for this purpose. Different taxonomies exist in order to better delimit the design choices, advantages, and drawbacks of existing approaches. On the other hand, assessing the relative performance of a given approach is a difficult task, since it depends on the characteristics of the problem at hand. In this paper, we focus on surrogate-assisted approaches using objective space decomposition as a core component. We propose a refined and fine-grained classification, ranging from EGO-like approaches to filtering or pre-screening. More importantly, we provide a comprehensive comparative study of a representative selection of state-of-the-art methods, together with simple baseline algorithms. We rely on selected benchmark functions taken from the bbob-biobj benchmarking test suite, which provides a variable range of objective function difficulties. Our empirical analysis highlights the effect of the available budget on the relative performance of each approach, and the impact of the training set and of the machine learning model construction on both solution quality and runtime efficiency.
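As a minimal illustration of the objective space decomposition idea central to the approaches surveyed above, the sketch below shows the Tchebycheff scalarisation widely used in decomposition-based frameworks such as MOEA/D. The function and parameter names are illustrative, not taken from the paper.

```python
def tchebycheff(fvals, weights, ideal):
    """Tchebycheff scalarisation: aggregate a vector of objective
    values into a single scalar for a given weight vector, relative
    to the ideal point.  Minimising this for many different weight
    vectors decomposes a multiobjective problem into a family of
    scalar subproblems."""
    return max(w * abs(f - z) for f, w, z in zip(fvals, weights, ideal))
```

Each weight vector then defines one scalar subproblem that a surrogate model can be trained on, or that a single-objective optimizer can target directly.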
Research has shown that for many single-objective graph problems whose optimum solutions are composed of low-weight sub-graphs, such as the minimum spanning tree (MST) problem, mutation operators favoring low-weight edges show superior performance. Intuitively, similar observations should hold for multi-criteria variants of such problems. In this work, we focus on the multi-criteria MST problem. A thorough experimental study is conducted in which we estimate the probability of edges being part of non-dominated spanning trees as a function of the edges' non-domination level or domination count. Building on the gained insights, we propose several biased one-edge-exchange mutation operators that differ in the edge-selection probability distribution used (biased towards edges of low rank). Our empirical analysis shows that, across different graph types (dense and sparse) and edge-weight types (both uniformly random and combinations of Euclidean and uniformly random), biased edge-selection strategies outperform the baseline uniform edge-selection. Our findings are particularly strong for dense graphs.
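A rank-biased edge-selection distribution of the kind described above can be sketched as follows; the rank-to-weight mapping is one hypothetical choice for illustration and is not the exact distribution studied in the paper.

```python
import random

def biased_edge_choice(edges, rank, beta=1.0, rng=random):
    # Weight each edge inversely to its non-domination rank (or
    # domination count), so that low-rank edges -- those empirically
    # more likely to appear in non-dominated spanning trees -- are
    # favoured.  The 1 / rank^(1 + beta) mapping is illustrative only.
    weights = [1.0 / (rank[e] ** (1.0 + beta)) for e in edges]
    return rng.choices(edges, weights=weights, k=1)[0]
```

In a one-edge-exchange mutation, such a distribution would choose the edge to insert, combined with a feasibility step that repairs the spanning tree (e.g. removing an edge from the induced cycle).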
In recent years, quality indicators (QIs) have been employed to design selection mechanisms for multi-objective evolutionary algorithms (MOEAs). These indicator-based MOEAs (IB-MOEAs) generate Pareto front approximations whose convergence and diversity characteristics are strongly related to the QI that guides the selection mechanism. However, on complex multi-objective optimization problems, the performance of IB-MOEAs is far from being completely understood. In this paper, we empirically analyze the convergence and diversity properties of five steady-state IB-MOEAs based on the hypervolume, R2, IGD+, ε+, and Δp. Regarding convergence, we analyze their speed of convergence and the final closeness to the true Pareto front. The IB-MOEAs adopted in our study were tested on problems having different Pareto front shapes, taken from six test suites. Our experimental results show general and particular strengths and weaknesses of the adopted IB-MOEAs. We believe that these results are a first step towards a deeper understanding of the behavior of IB-MOEAs.
The hypervolume (or S-metric) is a widely used quality measure employed in the assessment of multi- and many-objective evolutionary algorithms. It is also directly integrated as a component in the selection mechanism of some popular optimisers. Exact hypervolume calculation becomes prohibitively expensive in real-time applications as the number of objectives increases and/or the approximation set grows. As such, Monte Carlo (MC) sampling is often used to estimate its value rather than calculating it exactly. This estimation is inevitably subject to error. As is standard with Monte Carlo approaches, the standard error decreases with the square root of the number of MC samples. We propose a number of real-time hypervolume estimation methods for unconstrained archives, principally for use in real-time convergence analysis. Furthermore, we show how the number of domination comparisons can be considerably reduced by exploiting incremental properties of the approximated Pareto front. In these methods the estimation error monotonically decreases over time for (i) a capped budget of samples per algorithm generation and (ii) a fixed budget of dedicated computation time per optimiser generation for new MC samples. Results are provided using an illustrative worst-case scenario with rapid archive growth, demonstrating the orders-of-magnitude speed-up possible.
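The basic Monte Carlo hypervolume estimator underlying such methods (without the incremental sample reuse that the paper contributes) can be sketched as follows, assuming minimisation and a reference point dominated by every front member; function names are illustrative.

```python
import random

def mc_hypervolume(front, ref, n_samples=10_000, rng=random):
    # Estimate the hypervolume (minimisation) dominated by `front`
    # and bounded by the reference point `ref`.  Samples are drawn
    # uniformly from the smallest axis-aligned box containing the
    # whole dominated region, then the hit fraction is scaled by
    # the box volume.
    m = len(ref)
    lo = [min(p[d] for p in front) for d in range(m)]
    box_volume = 1.0
    for d in range(m):
        box_volume *= ref[d] - lo[d]
    hits = 0
    for _ in range(n_samples):
        x = [rng.uniform(lo[d], ref[d]) for d in range(m)]
        # x counts as a hit if some front member weakly dominates it
        if any(all(p[d] <= x[d] for d in range(m)) for p in front):
            hits += 1
    return box_volume * hits / n_samples
```

The standard error of this estimate shrinks as 1/sqrt(n_samples), matching the scaling noted above; this naive sketch redoes all domination comparisons from scratch, which is exactly the cost the incremental methods avoid.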
In optimiser analysis and design it is informative to visualise how a search point/population moves through the design space over time. Visualisable distance-based many-objective optimisation problems have been developed whose design space is two-dimensional with arbitrarily many objective dimensions. Previous work has shown how disconnected Pareto sets may be formed, how problems can be projected to and from arbitrarily many design dimensions, and how dominance resistant regions of design space may be defined. Most recently, a test suite has been proposed using distances to lines rather than points. However, active use of visualisable problems has been limited. This may be because the type of problem characteristics available has been relatively limited compared to many practical problems (and non-visualisable problem suites). Here we introduce the mechanisms required to embed several widely seen problem characteristics in the existing problem framework. These include variable density of solutions in objective space, landscape discontinuities, varying objective ranges, neutrality, and non-identical disconnected Pareto set regions. Furthermore, we provide an automatic problem generator (as opposed to hand-tuned problem definitions). The flexibility of the problem generator is demonstrated by analysing the performance of popular optimisers on a range of sampled instances.
Multi-Objective Evolutionary Algorithms (MOEAs) have been shown to be effective at addressing Multi-Objective Problems (MOPs). Nowadays, there is a variety of MOEAs proposed in the literature. However, it is a challenge to select the best MOEA for a specific problem domain, since MOEAs perform differently depending on the problem characteristics. Moreover, it is difficult to configure the parameter values or to select the operators properly. Considering this, we propose a new hyper-heuristic approach based on the cooperation of MOEAs (HH-CO). The main characteristic of HH-CO is that every MOEA maintains its own population and the populations exchange information among themselves. This paper presents experimental results of HH-CO on the benchmark from the CEC'18 competition on many-objective optimization. We present two comparisons: the first compares HH-CO to each MOEA that composes the pool and to a state-of-the-art hyper-heuristic; the second compares HH-CO to the state-of-the-art algorithms that won the CEC'18 competition. The results were evaluated using a set of quality indicators and were statistically analyzed. The conclusion is that HH-CO is a suitable approach mainly for challenging problems with complicated fitness landscapes, including multi-modality, bias, and a high number of objectives.
Convex quadratic objective functions are an important base case in state-of-the-art benchmark collections for single-objective optimization on continuous domains. Although often considered rather simple, they represent the highly relevant challenges of non-separability and ill-conditioning. In the multi-objective case, quadratic benchmark problems are under-represented. In this paper we analyze the specific challenges that can be posed by quadratic functions in the bi-objective case. Our construction yields a full factorial design of 54 different problem classes. We perform experiments with well-established algorithms to demonstrate the insights that can be supported by this function class. We find huge performance differences, which can be clearly attributed to two root causes: non-separability and alignment of the Pareto set with the coordinate system.
In model-based evolutionary algorithms (EAs), the underlying search distribution is adapted to the problem at hand, for example, based on dependencies between decision variables. Hill-valley clustering is an adaptive niching method in which a set of solutions is clustered such that each cluster corresponds to a single mode in the fitness landscape. This can be used to adapt the search distribution of an EA to the number of modes, exploring each mode separately. Especially in a black-box setting, where the number of modes is a priori unknown, an adaptive approach is essential for good performance. In this work, we introduce multi-objective hill-valley clustering and combine it with MAMaLGaM, a multi-objective EA, into the multi-objective hill-valley EA (MO-HillVallEA). We empirically show that MO-HillVallEA outperforms MAMaLGaM and other well-known multi-objective optimization algorithms on a set of benchmark functions. Furthermore, and perhaps most importantly, we show that MO-HillVallEA is capable of obtaining and maintaining multiple approximation sets simultaneously over time.
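The single-objective hill-valley test that the clustering above builds on can be sketched as follows: two solutions are placed in the same cluster if no interior point on the segment between them has worse fitness than both endpoints. This is a simplified sketch for a maximisation fitness `f`; names and the probe count are illustrative.

```python
def hill_valley_test(f, x, y, n_probes=5):
    # Probe interior points on the line segment between x and y.
    # If some probe is worse (lower fitness, maximisation) than both
    # endpoints, a valley separates them: different modes.
    worst_endpoint = min(f(x), f(y))
    for k in range(1, n_probes + 1):
        t = k / (n_probes + 1)
        z = [xi + t * (yi - xi) for xi, yi in zip(x, y)]
        if f(z) < worst_endpoint:
            return False  # a valley lies between x and y
    return True  # x and y appear to share a mode
```

Clustering then amounts to running this test between candidate solutions and existing cluster representatives; the paper's contribution is the extension of this idea to the multi-objective setting.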
Multiple sequence alignment (MSA) is a basic step in many bioinformatics analyses, including predicting the structure and function of proteins, orthology prediction, and estimating phylogenies. The objective of MSA is to infer the homology among the sequences of chosen species. Commonly, MSAs are inferred by optimizing a single objective function. The alignments estimated under one criterion may differ from the alignments generated by other criteria, inferring discordant homologies and thus leading to different evolutionary histories relating the sequences. To address this issue, researchers have recently advocated multi-objective formulations of MSA, where multiple conflicting objective functions are optimized simultaneously to generate a set of alignments. However, no theoretical or empirical justification with respect to a real-life application has been shown for any particular multi-objective formulation. In this study, we investigate the impact of multi-objective formulations in the context of phylogenetic tree estimation. In essence, we ask whether a phylogeny-aware metric can guide us in choosing appropriate multi-objective formulations. Employing evolutionary optimization, we demonstrate that trees estimated on the alignments generated by multi-objective formulations are substantially better than the trees estimated by state-of-the-art MSA tools, including PASTA, T-Coffee, and MAFFT.
Various distance minimization problems (DMPs) have been proposed to visualize the search behaviors of evolutionary multiobjective optimization (EMO) algorithms in solving many-objective problems, multiobjective multimodal problems, and dynamic multiobjective problems. However, those DMPs consider only box constraints. In this paper, we propose several constrained DMPs to visualize the behaviors of EMO algorithms with constraint handling techniques. In the proposed constrained DMPs, constraints are simply specified in the two-dimensional decision space. In the same manner, high-dimensional problems and multimodal problems can also be generated. Computational experiments show different behaviors by different algorithms on the various constrained DMPs.
In the last decade, several evolutionary algorithms have been proposed in the literature for solving multi- and many-objective optimization problems. The performance of such algorithms depends on their capability to produce a well-diversified front (diversity) that is as close to the Pareto-optimal front as possible (proximity). Diversity and proximity strongly depend on the geometry of the Pareto front, i.e., whether it forms a Euclidean, spherical, or hyperbolic hypersurface. However, existing multi- and many-objective evolutionary algorithms show poor versatility across different geometries. To address this issue, we propose a novel evolutionary algorithm that: (1) estimates the geometry of the generated front using a fast procedure with O(M × N) computational complexity (M is the number of objectives and N is the population size); and (2) adapts the diversity and proximity metrics accordingly. Thus, to form the population for the next generation, solutions are selected based on their contribution to the diversity and proximity of the non-dominated front with regard to the estimated geometry. Computational experiments show that the proposed algorithm outperforms state-of-the-art multi- and many-objective evolutionary algorithms on benchmark test problems with different geometries and numbers of objectives (M = 3, 5, and 10).
Finding effective ways to encode data into DNA is not an easy task, since the search space grows exponentially with respect to the length of the oligonucleotides (single DNA strands) used in the task. In short, the problem of DNA Codeword Design (CD) is the optimization problem of finding large sets of short oligonucleotides that satisfy certain non-crosshybridizing constraints (combinatorial and/or thermodynamic). In this paper, a Multi-objective Evolutionary Algorithm (MoEA-CD) that exploits the structural properties of the DNA space to improve the speed and quality of candidate solutions to the CD problem is proposed. To this purpose, we consider the CD problem as a set covering problem, and we introduce two approximation mechanisms for computing the coverage and overlap of sets in such a DNA space. The algorithm is tested in different DNA space dimensions, and our results indicate that MoEA-CD using these approximation mechanisms maintains excellent performance as the dimension of the search space increases.
Since around 2000, it has been widely believed that elitist evolutionary multi-objective optimization algorithms (EMOAs) always outperform non-elitist EMOAs. This paper revisits the performance of non-elitist EMOAs for bi-objective continuous optimization when using an unbounded external archive. This paper examines the performance of EMOAs with two elitist and one non-elitist environmental selection mechanisms. The performance of the EMOAs is evaluated on the bi-objective BBOB problem suite provided by the COCO platform. In contrast to conventional wisdom, the results show that non-elitist EMOAs with particular crossover methods perform remarkably well on the bi-objective BBOB problems with many decision variables when using the unbounded external archive. This paper also analyzes the properties of the non-elitist selection.
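In its simplest single-objective form, the elitist/non-elitist distinction examined above is the classic (μ + λ) versus (μ, λ) environmental selection; the sketch below uses illustrative names and a generic fitness key, not the paper's exact selection schemes.

```python
def plus_selection(parents, offspring, mu, key):
    # Elitist (mu + lambda): parents compete with offspring,
    # so the best solution found so far can never be lost.
    return sorted(parents + offspring, key=key)[:mu]

def comma_selection(parents, offspring, mu, key):
    # Non-elitist (mu, lambda): the next population is drawn from
    # offspring only, so even the best parent may be discarded --
    # which an unbounded external archive can compensate for.
    return sorted(offspring, key=key)[:mu]
```

With an unbounded external archive recording every non-dominated solution ever found, the non-elitist variant loses nothing for reporting purposes, which is the setting the study exploits.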
Archiver effects on the performance of state-of-the-art multi- and many-objective evolutionary algorithms
Early works on external solution archiving have pointed out the benefits of unbounded archivers, and there have been great advances, both theoretical and algorithmic, in bounded archiving methods. Moreover, recent work has shown that the populations of most multi- and many-objective evolutionary algorithms (MOEAs) lack the properties that one would desire when trying to find a bounded Pareto-optimal front. Despite all these results, many recent MOEAs are still being proposed, analyzed, and compared without considering any kind of archiver, on the assumption that its additional computational cost is not justified. In this paper, we investigate the effect of using various kinds of archivers, improving over previous studies in several aspects: (i) the parameters of MOEAs with and without an external archiver are tuned separately using automatic configuration methods; (ii) we consider a comprehensive range of problem scenarios (number of objectives, function evaluations, computation time limit); (iii) we employ multiple, complementary quality metrics; and (iv) we study the effect of unbounded archivers and two state-of-the-art bounded archiving methods. Our results show that both unbounded and bounded archivers are beneficial even for many-objective problems. We conclude that future proposals and comparisons of MOEAs must include archiving as an algorithmic component.
We propose a novel robust indicator-based algorithm, called IEMO/I, for interactive evolutionary multiple objective optimization. During the optimization run, IEMO/I selects at regular intervals a pair of solutions from the current population to be compared by the Decision Maker. The successively provided holistic judgements are employed to divide the population into fronts of potential optimality. These fronts are, in turn, used to bias the evolutionary search toward a subset of Pareto-optimal solutions most relevant to the Decision Maker. To ensure a fine approximation of such a subset, IEMO/I employs a hypervolume metric within a steady-state indicator-based evolutionary framework. The extensive experimental evaluation involving a number of benchmark problems confirms that IEMO/I is able to construct solutions highly preferred by the Decision Maker after a reasonable number of interactions. We also compare IEMO/I with selected state-of-the-art interactive evolutionary hybrids incorporating preference information in the form of pairwise comparisons, demonstrating its competitiveness.
We present a framework to build a multiobjective algorithm from single-objective ones. This framework addresses the p × n-dimensional problem of finding p solutions in an n-dimensional search space, maximizing an indicator by dynamic subspace optimization. Each single-objective algorithm optimizes the indicator function given p - 1 fixed solutions. Crucially, dominated solutions minimize their distance to the empirical Pareto front defined by these p - 1 solutions. We instantiate the framework with CMA-ES as single-objective optimizer. The new algorithm, COMO-CMA-ES, is empirically shown to converge linearly on bi-objective convex-quadratic problems and is compared to MO-CMA-ES, NSGA-II and SMS-EMOA.
Despite a large interest in real-world problems from the research field of evolutionary optimisation, established benchmarks in the field are mostly artificial. We propose to use game optimisation problems in order to form a benchmark, and implement function suites designed to work with the established COCO benchmarking framework. Game optimisation problems are real-world problems that are safe, reasonably complex, and at the same time practical, as they are relatively fast to compute. We have created four function suites based on two optimisation problems previously published in the literature (TopTrumps and MarioGAN). For each of the applications, we implemented multiple instances of several scalable single- and multi-objective functions with different characteristics and fitness landscapes. Our results show that game optimisation problems are interesting and challenging for evolutionary algorithms.
A multi-point mechanism of expected hypervolume improvement for parallel multi-objective Bayesian global optimization
The technique of parallelization is a trend in the field of Bayesian global optimization (BGO) and is important for real-world applications because it can make full use of CPUs and speed up execution. This paper proposes a multi-point mechanism of the expected hypervolume improvement (EHVI) for multi-objective BGO (MOBGO) by utilizing the truncated EHVI (TEHVI). The basic idea is to divide the objective space into several sub-objective spaces and then search for the optimal solutions in each sub-objective space using the TEHVI as the infill criterion. We studied the performance of the proposed algorithm and performed comparisons with the Kriging believer (KB) technique on five scientific benchmarks and a real-world application problem (a low-fidelity multi-objective airfoil optimization design). The stochastic experimental results show that the proposed algorithm performs better than KB with respect to the hypervolume indicator, indicating that the proposed method provides an efficient parallelization technique for MOBGO.