GECCO '18: Proceedings of the Genetic and Evolutionary Computation Conference
SESSION: Digital entertainment technologies and arts
Generating beginner heuristics for simple Texas Hold'em
Beginner heuristics for a game are simple rules that allow for effective playing. A chain of beginner heuristics of length N is the list of N rules that play the game best. Finding beginner heuristics is useful both for teaching a novice to play the game well and for understanding the dynamics of the game. We present and compare methods for finding beginner heuristics in a simple version of Poker: Pre-Flop Heads-Up Limit Texas Hold'em. We find that genetic programming outperforms greedy-exhaustive search and axis-aligned search in terms of finding well-playing heuristic chains of given length. We also find that there is a limited amount of non-transitivity when playing beginner heuristics of different lengths against each other, suggesting that while simpler heuristics are somewhat general, the more complex seem to overfit their training set.
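The "chain of beginner heuristics" described above can be pictured as an ordered rule list where the first matching rule fires. The sketch below is illustrative only (not the paper's implementation): the hand features, rule conditions, and actions are invented for this toy example, and the paper's search methods would optimize which rules appear in the chain.

```python
# Illustrative sketch: a length-3 chain of beginner heuristics for a
# pre-flop decision, applied first-match-wins. All features and rules
# here are made up for demonstration.

def classify(hand):
    """Crude pre-flop features for a two-card hand like [('A','s'), ('K','h')]."""
    ranks = "23456789TJQKA"
    r1, r2 = (ranks.index(card[0]) for card in hand)
    return {
        "pair": r1 == r2,
        "suited": hand[0][1] == hand[1][1],
        "high": max(r1, r2) >= ranks.index("Q"),
    }

# The search methods compared in the paper look for the best-playing
# chain of a given length; this one is fixed by hand.
CHAIN = [
    (lambda f: f["pair"], "raise"),
    (lambda f: f["suited"] and f["high"], "call"),
    (lambda f: True, "fold"),  # default rule closes the chain
]

def act(hand):
    features = classify(hand)
    for condition, action in CHAIN:
        if condition(features):
            return action

print(act([("A", "s"), ("A", "h")]))  # pair -> raise
print(act([("Q", "s"), ("K", "s")]))  # suited high cards -> call
print(act([("7", "d"), ("2", "c")]))  # nothing matches -> fold
```

Representing the chain as data rather than code is what makes it searchable: greedy-exhaustive search, axis-aligned search, and genetic programming can all enumerate or mutate the rule list directly.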
Opponent modeling and exploitation in poker using evolved recurrent neural networks
As a classic example of imperfect information games, Heads-Up No-limit Texas Hold'em (HUNL) has been studied extensively in recent years. While state-of-the-art approaches based on Nash equilibrium have been successful, they lack the ability to model and exploit opponents effectively. This paper presents an evolutionary approach to discover opponent models based on recurrent neural networks (LSTMs) and Pattern Recognition Trees. Experimental results showed that poker agents built with this method can adapt to opponents they have never seen in training and exploit weak strategies far more effectively than Slumbot 2017, one of the cutting-edge Nash-equilibrium-based poker agents. In addition, agents evolved through playing against relatively weak rule-based opponents tied statistically with Slumbot in heads-up matches. Thus, the proposed approach is a promising new direction for building high-performance adaptive agents in HUNL and other imperfect information games.
Generating a melody based on symbiotic evolution for musicians' creative activities
When musicians are asked to create music with a special theme or for a particular use, it is important for them to meet the client's demands while expressing their original creativity. Because this task is more difficult than free creative work, an automatic music composition system can support them. Such a system must make it easy to express the musicians' intentions and must keep processing time short. In this paper, a scene where musicians create a song using an automatic composition system is assumed, and a method of generating a melody for the system is proposed. In the proposed method, two kinds of sensibility models for melody are induced from existing music. A melody template is generated on the basis of those models using a symbiotic evolution algorithm. A melody is completed by determining the pitch of each note in the melody template. The results of a subjective evaluation experiment show the effectiveness of the proposed method. In addition, two case studies of works with professional musicians are also presented. The results of questionnaire surveys show that the composed music reflects the musicians' intentions and is not inferior to music composed by human composers.
Evolving indirectly encoded convolutional neural networks to play Tetris with low-level features
Tetris is a challenging puzzle game that has received much attention from the AI community, but much of this work relies on intelligent high-level features. Recently, agents played the game using low-level features (the 10 × 20 board) as input to fully connected neural networks evolved with the indirect encoding HyperNEAT. However, research in deep learning indicates that convolutional neural networks (CNNs) are superior to fully connected networks in processing visuospatial inputs. Therefore, this paper uses HyperNEAT to evolve CNNs. The results indicate that CNNs are indeed superior to fully connected neural networks in Tetris, and identify several factors that influence the successful evolution of indirectly encoded CNNs.
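The core of indirect encoding here is that filter weights are not evolved directly; a small evolved function is queried at each weight's coordinates. The following is a hypothetical mini-example of that idea (not HyperNEAT itself): a fixed stand-in "CPPN" generates a whole convolutional filter bank from coordinates, where evolution would instead vary the function.

```python
import math

# Hedged sketch of indirect encoding: a CPPN-style function maps
# (row, col, filter index) coordinates to weights, so one compact
# genome describes an entire filter bank. The function below is a
# fixed stand-in; in HyperNEAT it would be an evolved network.

def cppn(fx, fy, ch):
    return math.sin(2 * fx) * math.cos(2 * fy) - 0.1 * ch

def make_filters(n_filters=4, k=3):
    # Query the CPPN once per weight, centering coordinates on the filter.
    return [[[cppn(r - k // 2, c - k // 2, ch) for c in range(k)]
             for r in range(k)] for ch in range(n_filters)]

filters = make_filters()  # 4 filters of shape 3x3
```

Because weights are a smooth function of position, nearby weights are correlated, which is the geometric regularity the abstract credits for handling visuospatial input.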
Querying across time to interactively evolve animations
Compositional Pattern Producing Networks (CPPNs) are a generative encoding that has been used to evolve a variety of novel artifacts, such as 2D images, 3D shapes, audio timbres, soft robots, and neural networks. This paper takes systems that generate static 2D images and 3D shapes with CPPNs and introduces a time input, allowing each CPPN to produce a different set of results for each slice of time. Displaying the results in sequence creates smooth animations that can be interactively evolved to suit users' personal aesthetic preferences. A human subject study involving 40 individuals was conducted to demonstrate that people find the dynamic animations more complex than static outputs, and find interactive evolution of animations more enjoyable than evolution of static outputs. The novel idea of indirectly generating artifacts as a function of time could also be useful in other domains.
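The mechanism described above, adding a time input to a CPPN so that each time slice yields a frame, can be sketched minimally as follows. This is an illustrative toy (the function, grid size, and normalization are invented), not the paper's system.

```python
import math

# Minimal sketch: a CPPN-style function of (x, y) plus a time input t.
# Sampling t in sequence produces one frame per time slice, i.e. an
# animation. The composed activations below stand in for an evolved net.

def cppn(x, y, t):
    return math.sin(3 * x + t) * math.cos(3 * y - t) + math.tanh(x * y)

def frame(t, size=8):
    # Render one time slice as a grid of grayscale values in [0, 1].
    return [[(cppn(2 * i / size - 1, 2 * j / size - 1, t) + 2) / 4
             for j in range(size)] for i in range(size)]

frames = [frame(t / 10) for t in range(10)]  # a 10-frame animation
```

Because adjacent t values produce similar outputs, the resulting sequence is smooth, which is why displaying the slices in order reads as animation rather than a slideshow of unrelated images.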
Evolving Mario levels in the latent space of a deep convolutional generative adversarial network
Generative Adversarial Networks (GANs) are a machine learning approach capable of generating novel example outputs across a space of provided training examples. Procedural Content Generation (PCG) of levels for video games could benefit from such models, especially for games where there is a pre-existing corpus of levels to emulate. This paper trains a GAN to generate levels for Super Mario Bros using a level from the Video Game Level Corpus. The approach successfully generates a variety of levels similar to one in the original corpus, but is further improved by application of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). Specifically, various fitness functions are used to discover levels within the latent space of the GAN that maximize desired properties. Simple static properties are optimized, such as a given distribution of tile types. Additionally, the champion A* agent from the 2009 Mario AI competition is used to assess whether a level is playable, and how many jumping actions are required to beat it. These fitness functions allow for the discovery of levels that exist within the space of examples designed by experts, and also guide the search towards levels that fulfill one or more specified objectives.
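The latent-space search the abstract describes can be sketched at toy scale. Everything below is illustrative: the "generator" is a stand-in for the trained GAN, the fitness is a made-up version of the tile-distribution objective, and a simple (1+1) evolution strategy stands in for CMA-ES (which additionally adapts a full covariance matrix).

```python
import random

# Hedged sketch of searching a GAN's latent space for levels with a
# desired static property. A (1+1)-ES stands in for CMA-ES here; the
# generator and fitness are invented stand-ins, not the paper's code.

DIM = 8  # latent vector dimensionality (illustrative)

def generator(z):
    # Stand-in for the trained GAN: maps a latent vector to one summary
    # statistic of the decoded level, the fraction of ground tiles.
    return min(1.0, max(0.0, 0.5 + sum(z) / (2 * DIM)))

def fitness(z, target=0.3):
    # Match a desired tile distribution (higher is better).
    return -abs(generator(z) - target)

random.seed(0)
z = [random.gauss(0, 1) for _ in range(DIM)]
best = fitness(z)
sigma = 0.5
for _ in range(200):
    candidate = [zi + random.gauss(0, sigma) for zi in z]
    f = fitness(candidate)
    if f >= best:
        z, best = candidate, f

print(round(generator(z), 2))  # converges toward the 0.3 target
```

The paper's playability objective works the same way, except fitness comes from running the A* agent on the decoded level instead of from a closed-form property.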
Evolving simple programs for playing Atari games
Cartesian Genetic Programming (CGP) has previously shown capabilities in image processing tasks by evolving programs with a function set specialized for computer vision. A similar approach can be applied to Atari playing. Programs are evolved using mixed-type CGP with a function set suited for matrix operations, including image processing, while allowing controller behavior to emerge. Although the programs are relatively small, many controllers are competitive with state-of-the-art methods on the Atari benchmark set and require less training time. By evaluating the programs of the best evolved individuals, simple but effective strategies can be found.
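A CGP genome is a fixed-length list of nodes, each naming a function and its input connections, evaluated feed-forward. The sketch below shows that evaluation at toy scale with scalar functions; the paper's function set operates on matrices (image frames), but the genome-to-graph mechanics are the same. The genome and functions here are invented for illustration.

```python
import operator

# Hedged sketch of CGP program evaluation. Each genome entry is
# (function id, input node id, input node id); inputs occupy node ids
# 0..n_inputs-1 and each new node appends its value to the list.

FUNCS = [operator.add, operator.sub, operator.mul, lambda a, b: max(a, b)]

def evaluate(genome, inputs, output_node):
    values = list(inputs)                 # node ids 0..len(inputs)-1
    for f_id, a, b in genome:             # feed-forward through the graph
        values.append(FUNCS[f_id](values[a], values[b]))
    return values[output_node]

# node 2 = in0 + in1, node 3 = node2 * in0, node 4 = max(node3, in1)
genome = [(0, 0, 1), (2, 2, 0), (3, 3, 1)]
print(evaluate(genome, [2.0, 5.0], 4))  # max((2 + 5) * 2, 5) = 14.0
```

Because the genome is just this flat list of integers, mutation and the readability of evolved controllers both come cheaply, which is what lets the authors inspect the best individuals and read off their strategies.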