GECCO '18: Proceedings of the Genetic and Evolutionary Computation Conference
SESSION: Evolutionary combinatorial optimization and metaheuristics
Min-conflicts heuristic for multi-mode resource-constrained projects scheduling
We investigate solving the Multi-Mode Resource-Constrained Multiple Projects Scheduling Problem with heuristic techniques. A new method based on the Min-Conflicts heuristic is proposed and evaluated. The main idea is to efficiently explore the neighborhood of the current solution based on conflicts between activities that share the same resources. This technique is further used within the Iterated Local Search framework, which additionally includes a perturbation step and an acceptance criterion. Furthermore, we propose three novel project-wise neighborhood operators. Our method is evaluated on the benchmark instances proposed in the MISTA conference challenge and compared to state-of-the-art approaches. Our algorithm obtains results competitive with the solver ranked third in the MISTA challenge. We also applied our method to the existing benchmark instances for multi-mode resource-constrained single-project scheduling problems, and we provide six new upper bounds for well-known instances of the MMLIB library.
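The Min-Conflicts heuristic the method builds on is a classic repair-based local search: repeatedly pick a variable involved in a conflict and reassign it to the value minimizing conflicts. A minimal sketch on the n-queens toy problem (not the authors' scheduling formulation; all names here are illustrative):

```python
import random

def min_conflicts_nqueens(n, max_steps=10000, seed=0):
    """Classic Min-Conflicts local search, illustrated on n-queens.

    state[c] = row of the queen placed in column c."""
    rng = random.Random(seed)
    state = [rng.randrange(n) for _ in range(n)]

    def conflicts(col, row):
        # queens attack along rows and diagonals
        return sum(1 for c in range(n) if c != col and
                   (state[c] == row or abs(state[c] - row) == abs(c - col)))

    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(c, state[c]) > 0]
        if not conflicted:
            return state  # feasible assignment found
        col = rng.choice(conflicted)  # pick a conflicted variable at random
        # reassign it to the value with the fewest conflicts (random tie-break)
        state[col] = min(range(n),
                         key=lambda r: (conflicts(col, r), rng.random()))
    return None  # no conflict-free assignment within the step budget
```

In the scheduling setting, the conflicted variables would correspond to activities competing for the same resource, and the repair step would move an activity to the start time (and mode) minimizing resource conflicts.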
An effective hybrid meta-heuristic for a heterogeneous flow shop scheduling problem
In this paper, we study an extension of the non-permutation flow shop scheduling problem, where n jobs have to be scheduled on m machines and m workers have to be assigned to operate the m machines. The workers are heterogeneous; that is, the processing time of a job on a machine depends on the assigned worker. The objective is to find the best worker allocation and the corresponding job schedule that minimize the maximum completion time (makespan). Motivated by the computational complexity of the problem, we propose a Variable Neighborhood Search (VNS) heuristic coupled with an Iterated Greedy (IG) algorithm to obtain near-optimal solutions. The VNS heuristic is responsible for defining the allocation of workers to machines, and IG is responsible for obtaining a schedule of jobs on machines. The performance of our meta-heuristic, named VNS-IG, is compared with the state-of-the-art meta-heuristic proposed in the literature for the problem under study. The results show that our heuristic outperforms the existing algorithm by a significant margin.
Algorithm selection on generalized quadratic assignment problem landscapes
Algorithm selection is useful in decision situations where one of many alternative algorithm instances has to be chosen. This is often the case in heuristic optimization, as formalized by the well-known no-free-lunch (NFL) theorem. A consequence of the NFL theorem is that a heuristic algorithm may only gain a performance improvement on a subset of problems. With the present study, we aim to identify correlations between observed differences in performance and problem characteristics obtained from statistical analysis of the problem instance and from fitness landscape analysis (FLA). Finally, we evaluate the performance of a recommendation algorithm that uses this information to make an informed choice of algorithm instance.
A multi-objective formulation of the team formation problem in social networks: preliminary results
The Team Formation Problem in Social Networks (TFP-SN) consists of finding, within a social network, the team of experts that best undertakes a given task. It is mandatory for the team to meet the skill set required by the task, and it is desirable that the team members communicate effectively to achieve their goal. This problem was proven to be NP-hard for the optimization of different variants of a communication cost function. Even though real-life instances of this problem involve the simultaneous optimization of two or more conflicting objectives, studies of the TFP-SN under the multi-objective model have been rather scarce. In this work, we introduce the TFP-SN as a multi-objective optimization problem for the maximization of two conflicting objectives: the collaborative density and the team's ratio of expertise. We tackle this problem employing the NSGA-II framework, for which a proper representation and variation operators are proposed. Experimental results show that the proposed approach generates competitive solutions when compared with well-known heuristics for this problem. Additionally, in response to the lack of benchmarks and to set up a baseline for future comparisons, we provide a detailed description of the generated instances.
Enhancing partition crossover with articulation points analysis
Partition Crossover is a recombination operator for pseudo-Boolean optimization with the ability to explore an exponential number of solutions in linear or quadratic time. It decomposes the objective function as a sum of subfunctions, each one depending on a different set of variables. The decomposition makes it possible to select the best parent for each subfunction independently, and the operator provides the best out of 2^q solutions, where q is the number of subfunctions in the decomposition. These subfunctions are defined over the connected components of the recombination graph: a subgraph of the objective function's variable interaction graph containing only the variables that differ in the two parents. In this paper, we advance further and propose a new way to increase the number of independent subfunctions by analyzing the articulation points of the recombination graph. These points correspond to variables that, once flipped, increase the number of connected components. The presence of a connected component with an articulation point increases the number of explored solutions by a factor of at least 4. We evaluate the new operator using Iterated Local Search combined with Partition Crossover to solve NK Landscapes and MAX-SAT.
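The component-wise selection at the heart of basic Partition Crossover (without the articulation-point extension proposed here) can be sketched as follows; the subfunction representation and all names are illustrative:

```python
def partition_crossover(p1, p2, subfunctions):
    """Sketch of Partition Crossover for a decomposable function (minimised).

    subfunctions: list of (vars, f) pairs, where vars is a tuple of variable
    indices and f maps the corresponding bit values to a real contribution."""
    n = len(p1)
    diff = {i for i in range(n) if p1[i] != p2[i]}
    # recombination graph: differing variables linked if they co-occur
    # in some subfunction
    adj = {i: set() for i in diff}
    for vars_, _ in subfunctions:
        d = [v for v in vars_ if v in diff]
        for a in d:
            adj[a].update(x for x in d if x != a)
    # connected components via depth-first search
    comps, seen = [], set()
    for v in diff:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    # take each component from whichever parent contributes less
    child = list(p1)
    for comp in comps:
        def partial(parent):
            return sum(f(tuple(parent[v] for v in vars_))
                       for vars_, f in subfunctions if comp & set(vars_))
        if partial(p2) < partial(p1):
            for v in comp:
                child[v] = p2[v]
    return child
```

Because every subfunction's differing variables fall entirely inside one connected component, the components can be decided independently, which is what yields the best of 2^q possible offspring in a single pass.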
A fitness landscape analysis of the travelling thief problem
Local Optima Networks are models proposed to understand the structure and properties of combinatorial landscapes. The fitness landscape is explored as a graph whose nodes represent the local optima (or basins of attraction) and edges represent the connectivity between them. In this paper, we use this representation to study a combinatorial optimisation problem, with two interdepend components, named the Travelling Thief Problem (TTP). The objective is to understand the search space structure of the TTP using basic local search heuristics and to distinguish the most impactful problem features. We create a large set of enumerable TTP instances and generate a Local Optima Network for each instance using two hill climbing variants. Two problem features are investigated, namely the knapsack capacity and profit-weight correlation. Our insights can be useful not only to design landscape-aware local search heuristics, but also to better understand what makes the TTP challenging for specific heuristics.
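For an enumerable pseudo-Boolean landscape, the basic Local Optima Network construction can be sketched as follows (a generic best-improvement hill climber with "escape" edges; the paper's exact hill climbing variants and edge definition may differ):

```python
import itertools

def build_lon(f, n):
    """Sketch of Local Optima Network construction for an enumerable
    landscape over {0,1}^n, maximising f.  Every string is mapped to its
    local optimum (its basin), and an edge (u, v) is added when a single
    bit-flip escapes from optimum u into the basin of optimum v."""
    def neighbors(x):
        return [x[:i] + (1 - x[i],) + x[i+1:] for i in range(n)]

    def climb(x):  # best-improvement hill climbing
        while True:
            best = max(neighbors(x), key=f)
            if f(best) <= f(x):
                return x
            x = best

    basin = {x: climb(x) for x in itertools.product((0, 1), repeat=n)}
    nodes = set(basin.values())           # the local optima
    edges = set()
    for opt in nodes:
        for y in neighbors(opt):
            if basin[y] != opt:           # one flip lands in another basin
                edges.add((opt, basin[y]))
    return nodes, edges
```

On OneMax this yields a single node and no edges; on a deceptive or multimodal function the node and edge counts quantify exactly the kind of structure studied here.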
A heuristic algorithm based on tabu search for the solution of flexible job shop scheduling problems with lot streaming
The Job Shop Scheduling Problem (JSSP) has received great attention from the operations research and metaheuristic communities, because it constitutes a challenging optimization problem with many real-world applications. In this work, an extension of the simple JSSP operating mode is introduced, including flexibility (operations can be processed by several machines) and lot streaming (jobs may be divided into sublots).
An integer programming model is developed in this study for tackling the resulting FJSSP-LS. This model was implemented within a commercial solver, Gurobi, and additionally a Tabu Search (TS) algorithm is proposed as a solution technique. Compared over a set of 48 instances adapted to the FJSSP-LS working mode, the TS heuristic clearly outperforms the upper bounds found by Gurobi, which is never able to converge to optimality within a one-hour time limit. Moreover, the analysis of the results provides some insights and guidelines regarding the use of flexibility and lot streaming within the JSSP.
Escaping large deceptive basins of attraction with heavy-tailed mutation operators
In many evolutionary algorithms (EAs), one parameter that needs to be tuned is the mutation rate, which determines the probability for each decision variable to be mutated. Typically, this rate is set to 1/n for the duration of the optimization, where n is the number of decision variables. This setting has the appeal that the expected number of mutated variables per iteration is one.
In a recent theoretical study, it was proposed to sample the number of mutated variables from a power-law distribution. This results in a significantly higher probability on larger numbers of mutations, so that escaping local optima becomes more probable.
In this paper, we propose another class of non-uniform mutation rates. We study the benefits of this operator in terms of average-case black-box complexity analysis and experimental comparison. We consider both pseudo-Boolean artificial landscapes and combinatorial problems (the Minimum Vertex Cover and the Maximum Cut).
We observe that our non-uniform mutation rates significantly outperform the standard choices, when dealing with landscapes that exhibit large deceptive basins of attraction.
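The power-law operator from the earlier theoretical study mentioned above (the "fast GA" of Doerr et al.) can be sketched as follows; the exponent, the support {1, ..., n/2}, and the choice of flipping exactly k bits (the original applies standard bit mutation at rate k/n) are simplifying assumptions:

```python
import random

def heavy_tailed_mutation(x, beta=1.5, rng=random):
    """Sample the mutation strength k from a power law on {1, ..., n/2},
    giving much more probability mass to large jumps than the binomial
    distribution of standard 1/n mutation, then flip k distinct bits."""
    n = len(x)
    half = max(n // 2, 1)
    weights = [i ** (-beta) for i in range(1, half + 1)]   # Pr[k=i] ~ i^-beta
    k = rng.choices(range(1, half + 1), weights=weights)[0]
    child = list(x)
    for i in rng.sample(range(n), k):                      # flip k positions
        child[i] = 1 - child[i]
    return child
```

The heavy tail is the point: large k (a long jump out of a deceptive basin) still occurs with polynomially small, rather than exponentially small, probability.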
Improving the run time of the (1 + 1) evolutionary algorithm with Luby sequences
In the context of black box optimization, one of the most common ways to handle deceptive attractors is to periodically restart the algorithm. In this paper, we explore the benefits of combining the simple (1 + 1) Evolutionary Algorithm (EA) with the Luby Universal Strategy - the (1 + 1) EAu, a meta-heuristic that does not require parameter tuning.
We first consider two artificial pseudo-Boolean landscapes, on which the (1 + 1) EA exhibits exponential run time. We prove that the (1 + 1) EAu has polynomial run time on both instances.
We then consider the Minimum Vertex Cover on two classes of graphs. Again, the (1 + 1) EA yields exponential run time on those instances, and the (1 + 1) EAu finds the global optimum in polynomial time.
We conclude by studying the Makespan Scheduling. We consider an instance on which the (1 + 1) EA does not find a (4/3 − ϵ)-approximation in polynomial time, and we show that the (1 + 1) EAu reaches a (4/3 − ϵ)-approximation in polynomial time. We then prove that the (1 + 1) EAu serves as an Efficient Polynomial-time Approximation Scheme (EPTAS) for the Partition Problem, for a (1 + ϵ)-approximation with ϵ > 4/n.
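The Luby universal restart sequence (1, 1, 2, 1, 1, 2, 4, ...) underlying the (1 + 1) EAu can be generated as follows, together with a sketch of the restarting EA on a maximisation problem; the restart-length scaling and the OneMax setup are illustrative assumptions, not the paper's exact construction:

```python
import random

def luby(i):
    """i-th term (1-indexed) of the Luby universal restart sequence."""
    k = 1
    while (1 << k) - 1 < i:          # smallest k with i <= 2^k - 1
        k += 1
    if i == (1 << k) - 1:
        return 1 << (k - 1)
    return luby(i - ((1 << (k - 1)) - 1))   # recurse into the left copy

def one_plus_one_ea_luby(f, n, total_budget, rng=random):
    """(1+1) EA with standard 1/n bit mutation, restarted from a fresh
    random string after luby(r) iterations of run r.  f is maximised."""
    best_x, best_f = None, float("-inf")
    spent, r = 0, 1
    while spent < total_budget:
        length = luby(r)
        r += 1
        x = [rng.randrange(2) for _ in range(n)]
        fx = f(x)
        for _ in range(length):
            y = [b ^ (rng.random() < 1.0 / n) for b in x]
            fy = f(y)
            if fy >= fx:
                x, fx = y, fy
            spent += 1
            if spent >= total_budget:
                break
        if fx > best_f:
            best_x, best_f = x, fx
    return best_x, best_f
```

The appeal of the Luby strategy is exactly what the abstract states: no restart parameter has to be tuned, yet the schedule is within a logarithmic factor of the optimal (unknown) fixed cutoff.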
Randomized greedy algorithms for covering problems
Greedy algorithms provide a fast and often also effective solution to many combinatorial optimization problems. However, it is well known that they sometimes lead to low-quality solutions on certain instances. In this paper, we explore the use of randomness in greedy algorithms for the minimum vertex cover and dominating set problems and compare the resulting performance against their deterministic counterparts. Our algorithms are based on a parameter γ which allows us to explore the spectrum between uniform and deterministic greedy selection in the steps of the algorithm. Our theoretical and experimental investigations point out the benefits of incorporating randomness into greedy algorithms for the two considered combinatorial optimization problems.
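One way such a γ-parameterized interpolation between uniform and deterministic greedy selection can look for minimum vertex cover is sketched below; the exact interpolation scheme used in the paper may differ, and all names (with γ spelled `gamma`) are illustrative:

```python
import random

def randomized_greedy_vertex_cover(edges, gamma, rng=random):
    """Randomized greedy sketch for minimum vertex cover: at each step,
    with probability gamma pick the maximum-degree vertex (the classic
    greedy rule), otherwise pick uniformly among vertices still covering
    an uncovered edge.  gamma=1 recovers deterministic greedy, gamma=0
    pure uniform selection."""
    uncovered = set(map(frozenset, edges))
    cover = set()
    while uncovered:
        degree = {}
        for e in uncovered:            # degree w.r.t. uncovered edges only
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        if rng.random() < gamma:
            v = max(degree, key=degree.get)
        else:
            v = rng.choice(sorted(degree))
        cover.add(v)
        uncovered = {e for e in uncovered if v not in e}
    return cover
```

The result is always a valid cover; γ only trades off the worst-case instances that trap deterministic greedy against the variance introduced by uniform choices.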
A merge search algorithm and its application to the constrained pit problem in mining
Many large-scale combinatorial problems contain too many variables and constraints for conventional mixed-integer programming (MIP) solvers to manage. To make such problems easier for solvers to handle, various meta-heuristic techniques can be applied to reduce the size of the search space by removing, or aggregating, variables and constraints. A novel meta-heuristic technique called merge search is presented in this paper; it takes an initial solution and uses the information from a large population of neighbouring solutions to determine promising areas of the search space to focus on. The population is merged to produce a restricted sub-problem, with far fewer variables and constraints, which can then be solved by a MIP solver. Merge search is applied to a complex problem from open-pit mining called the constrained pit (CPIT) problem and compared to current state-of-the-art results on the well-known MineLib benchmark problems, where it is shown to give better-quality solutions on five of the six instances.
Dominance, epsilon, and hypervolume local optimal sets in multi-objective optimization, and how to tell the difference
Local search algorithms have shown good performance for several multi-objective combinatorial optimization problems. These approaches naturally stop at a local optimal set (LO-set) under given definitions of neighborhood and preference relation among subsets of solutions, such as the set-based dominance relation, hypervolume, or the epsilon indicator. It is an open question how LO-sets under different set preference relations relate to each other. This paper reports an in-depth experimental analysis on multi-objective NK-landscapes. Our results reveal that, whatever the preference relation, the number of LO-sets typically increases with the problem non-linearity and decreases with the number of objectives. We observe that strict LO-sets of bounded cardinality under set-dominance are LO-sets under both epsilon and hypervolume, and that LO-sets under hypervolume are LO-sets under set-dominance, whereas LO-sets under epsilon are not. Nonetheless, LO-sets under set-dominance are more similar to LO-sets under epsilon than to those under hypervolume. These findings have important implications for multi-objective local search. For instance, a dominance-based approach with a bounded archive gets trapped more easily and might have difficulty identifying an LO-set under epsilon or hypervolume. By contrast, a hypervolume-based approach is expected to perform more steps before converging to better approximations.
Evolutionary multi-level acyclic graph partitioning
Directed graphs are widely used to model data flow and execution dependencies in streaming applications. This enables the utilization of graph partitioning algorithms for the problem of parallelizing execution on multiprocessor architectures under hardware resource constraints. However, due to program memory restrictions in embedded multiprocessor systems, applications need to be divided into parts without cyclic dependencies. This can be done by a subsequent second graph partitioning step with an additional acyclicity constraint.
We have two main contributions. First, we contribute a multilevel algorithm for the acyclic graph partitioning problem that achieves a 9% reduction of the edge cut compared to the previous single-level algorithm. Based on this, we engineer an evolutionary algorithm to further reduce the cut, achieving a 30% reduction on average compared to the state of the art.
We show that this can reduce the amount of communication for a real-world imaging application and thereby accelerate it by up to 5% on an embedded multiprocessor architecture. In addition, we demonstrate how a custom fitness function for the evolutionary algorithm can be used to optimize other objectives like load balancing if the communication volume is not predominantly important on a given hardware platform.
A two-level diploid genetic-based algorithm for solving the family traveling salesman problem
In this paper, we consider the Family Traveling Salesman Problem (FTSP), a variant of the classical Traveling Salesman Problem (TSP). Given a partition of the nodes into a predefined number of clusters, called families, the aim of the FTSP is to find a minimum-cost tour visiting a given number of nodes from each family. We describe a novel solution approach for the FTSP obtained by decomposing the problem into two smaller subproblems, a macro-level subproblem and a micro-level subproblem, and solving them separately. The goal of the first subproblem is to provide tours visiting the families, using a classical genetic algorithm and a diploid genetic algorithm, while the aim of the second subproblem is to find the minimum-cost tour, corresponding to the above-mentioned tours, visiting a given number of nodes from each family. The second subproblem is solved by transforming each global tour into a TSP instance, which is then solved to optimality using the Concorde TSP solver. Preliminary computational results on a commonly used set of benchmark instances show that our approach provides competitive solutions in comparison to the existing methods for solving the FTSP.
Memetic multilevel hypergraph partitioning
Hypergraph partitioning has a wide range of applications, such as VLSI design and scientific computing. With a focus on solution quality, we develop the first multilevel memetic algorithm to tackle the problem. Key components of our contribution are new effective multilevel recombination and mutation operations that provide a large amount of diversity. We perform a wide range of experiments on a benchmark set containing instances from application areas such as VLSI design, SAT solving, social networks, and scientific computing. Compared to the state-of-the-art hypergraph partitioning tools hMetis, PaToH, and KaHyPar, our new algorithm computes the best results on almost all instances of the benchmark set.
Fitness landscape analysis around the optimum in computational protein design
The geometry and properties of the fitness landscapes of Computational Protein Design (CPD) are not well understood, owing to the difficulty sampling methods have in accessing the optima of these NP-hard problems and exploring their neighborhoods. In this paper, we enumerate all solutions within a 2 kcal/mol energy interval of the optimum of two CPD problems. We compute the number of local minima, the size of the attraction basins, and the local optima network. We provide various features in order to characterize the fitness landscapes, in particular the multimodality and ruggedness of the fitness landscape. The results show some key differences between the fitness landscapes and help to understand the successes and failures of metaheuristics on CPD problems. Our analysis gives some previously inaccessible and valuable information on the problem structure related to the optima of the CPD instances (multi-funnel structure), and could lead to the development of more efficient metaheuristic methods.
One-class constraint acquisition with local search
We propose One-Class Constraint Acquisition with Local Search (OCCALS), a novel method for computer-assisted acquisition of Mixed-Integer Linear Programming (MILP) models from examples. OCCALS is designed to help human experts prepare MILP models for their systems. OCCALS supports building MILP models from examples of the positive class only, thus requiring a training set that is relatively cheap to acquire, e.g., by observing the historical execution of a system. OCCALS effectively handles multimodal distributions of the training set, which may occur in practice. We show experimentally the superiority of OCCALS over a state-of-the-art method.
Multifractality and dimensional determinism in local optima networks
We conduct a study of local optima networks (LONs) in a search space using fractal dimensions. The fractal dimension (FD) of these networks is a complexity index which assigns a non-integer dimension to an object. We propose a fine-grained approach to obtaining the FD of LONs, using the probabilistic search transitions encoded in LON edge weights. We then apply multi-fractal calculations to LONs for the first time, comparing with mono-fractal analysis. For complex systems such as LONs, the dimensionality may be different between two sub-systems and multi-fractal analysis is needed. Here we focus on the Quadratic Assignment Problem (QAP), conducting fractal analyses on sampled LONs of reasonable size for the first time. We also include fully enumerated LONs of smaller size. Our results show that local optima spaces can be multi-fractal and that valuable information regarding probabilistic self-similarity is encoded in the edge weights of local optima networks. Links are drawn between these phenomena and the performance of two competitive metaheuristic algorithms.
Iterated greedy algorithms for the hybrid flowshop scheduling with total flow time minimization
The hybrid flowshop scheduling problem (HFSP) has been extensively studied in the literature, due to its complexity and real-life applicability. Various exact and heuristic algorithms have been developed for the HFSP, and most consider makespan as the only criterion. Studies on the HFSP with the objective of minimizing total flow time have been rather limited. This paper presents a mathematical model and efficient iterated greedy algorithms, IG and IGALL, for the HFSP with the total flow time criterion. In order to evaluate the performance of the proposed IG algorithms, the well-known HFSP benchmark suite from the literature is used. As the problem is NP-hard, the proposed mathematical model is solved for all 87 instances under a time limit in CPLEX, and optimal results are obtained for some of these instances. The performance of the IG algorithms is measured by comparison with these time-limited CPLEX results. Computational results show that the proposed IG algorithms perform very well in terms of both solution time and quality. To the best of our knowledge, this is the first time results for the flow time criterion have been reported for the HFSP benchmark suite.