TOC GECCO 2021 Keynotes

GECCO '21: Proceedings of the Genetic and Evolutionary Computation Conference

Reverse-engineering core common sense with the tools of probabilistic programs, game-style simulation engines, and inductive program synthesis

  • Joshua Tenenbaum

None of today's AI systems or approaches comes anywhere close to capturing the common sense of a toddler, or even a 3-month-old infant. I will talk about some of the challenges facing conventional machine learning paradigms, such as end-to-end unsupervised learning in deep networks and deep reinforcement learning, and discuss some initial, small steps we have taken with an alternative cognitively inspired AI approach. This requires us to develop a different engineering toolset, based on probabilistic programs, game-style simulation programs as general-purpose startup software (or "the game engine in the head"), and learning as programming (or "the child as hacker").

Statistical physics and statistical inference

  • Marc Mézard

A major challenge of contemporary statistical inference is the large-scale limit, where one wants to discover the values of many hidden parameters using large amounts of data. In recent years, ideas from the statistical physics of disordered systems have helped to develop new algorithms for important inference problems, ranging from community detection to compressed sensing, machine learning (notably neural networks), tomography, and generalized linear regression. The talk will review these developments and explain how they can be used to develop new types of algorithms and to identify phase transitions.

Why AI is harder than we think

  • Melanie Mitchell

Since its beginning in the 1950s, the field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment ("AI Spring") and periods of disappointment, loss of confidence, and reduced funding ("AI Winter"). Even with today's seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected. One reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself. In this talk I will discuss some fallacies in common assumptions made by AI researchers, which can lead to overconfident predictions about the field. I will also speculate on what is needed for the grand challenge of making AI systems more robust, general, and adaptable --- in short, more intelligent.