GECCO '18: Proceedings of the Genetic and Evolutionary Computation Conference
SESSION: Keynote talk
Although there is much interest in AI, there is little discussion of why AI is necessary, or of what kind of world we want to create with it. In this talk, I will focus on this "why" and on the technology we developed in answer to it. AI is not necessary simply because the technology has advanced. AI is necessary because demand now requires flexible responses to change and diversity, whereas the standardization and horizontal rollout of work practiced since Taylor cannot meet this requirement. This is the conversion from "rule-oriented" to "outcome-oriented." AI is an important tool for making society outcome-oriented. We have developed an AI (we call it "multipurpose AI") that enables flexible action according to circumstances for a given outcome. It has already been used in more than 60 business cases in areas such as finance, distribution, energy, and transportation.
To enable flexible action, a consistent objective is required. Every objective serves a higher objective, and the higher the level in this hierarchy, the greater its value and consistency. "The happiness of society," however, sits at the top in any problem. We have discovered a method to quantify people's happiness from subtle body movements measured with an accelerometer, and we have also developed AI technology that supports the enhancement of people's happiness according to circumstances. This paves the way to a new capitalism that judges everything by human happiness. Moving on from a society that sought to follow uniform rules, it becomes possible to realize a society in which each person can bloom where they are planted. In this presentation, I would like to show the whole picture of this new society.
Around 2010, the brain-machine interface (or brain-computer interface), which allows a machine (a computer) to be controlled by human thought, was one of the hot topics in computer science. The basic technical components of a brain-machine interface are sensing a biosignal (i.e., brain activity), analyzing the biosignal data, and controlling a machine based on the analysis. Many institutes and companies joined competitions to show their brain-machine interface results and applications. However, few products were released based on these competitions. The main reasons were performance insufficient for product level and the lack of suitable applications. After these competitions, progress in the exploitation of biosignal data more or less stalled.
Recently, the exploitation of biosignal data to estimate human state is becoming a hot topic again, owing to both technological advances and social needs. The performance of machine learning, including the deep learning proposed by Hinton, is dramatically improving, and progress in computing resources, e.g., GPGPUs, allows us to handle huge datasets and carry out massive computations. After deep learning showed surprising performance in the field of computer vision, the exploitation of biosignal data has been revisited from a machine-learning point of view.
Furthermore, as autonomous driving and advanced driver-assistance systems based on AI technology become more realistic, estimating the state of the human (the driver) is starting to attract attention: except in L4/L5 autonomous driving, the transfer of responsibility for control of the car from the car to the driver, or vice versa, the so-called "takeover" or "handover", must be considered. In order to transfer that responsibility properly, we have to consider several strategies according to the human's state, e.g., a normal transfer, a warning, or an emergency stop without transfer.
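The strategy selection described above could be sketched as follows. This is a minimal illustrative sketch, not from the talk: the state labels and the mapping from state to strategy are assumptions chosen only to make the idea concrete.

```python
from enum import Enum

class DriverState(Enum):
    """Hypothetical discrete labels for the estimated driver state."""
    ALERT = "alert"
    DISTRACTED = "distracted"
    INCAPACITATED = "incapacitated"

def takeover_strategy(state: DriverState) -> str:
    """Choose how to hand control back to the driver, given the estimated state.

    The three strategies mirror those named in the abstract: normal transfer,
    warning, or emergency stop without transfer.
    """
    if state is DriverState.ALERT:
        return "normal transfer"   # driver is ready; hand over control directly
    if state is DriverState.DISTRACTED:
        return "warning"           # alert the driver first, then attempt transfer
    return "emergency stop"        # driver cannot take over; stop without transfer
```

In practice the state would itself be estimated from biosignal data by a learned classifier; the sketch only shows the downstream decision step.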
In my presentation, the history of the exploitation of biosignal data, including brain-machine interfaces, will be explained, and several hot topics around biosignal exploitation, e.g., the combination of AI and brain-machine interfaces, will be shown. Finally, I would like to discuss future topics in biosignal exploitation.
Based on my experience onboard the International Space Station (ISS) and the Space Shuttle Discovery, I will introduce how we train on the ground and how we live in space.
Human-machine interfaces in recent space vehicles have been advancing. The usefulness of robots has already been researched and verified in some cases, such as serving as a conversation partner in the spaceship, assisting with video recording, and performing extravehicular activity; robots will eventually play even more important roles.
To make robots cooperate with humans on an equal basis, or to have robots replace humans while sharing responsibilities appropriately, what we need is not merely improvement but innovation. When pursuing innovation, it is important to look ahead beyond a few generations; even technologies that seem like absurd dreams may lead to breakthroughs. Since I serve as an Executive Committee Member of the "World Robot Summit" to be held in Japan in 2020, I would also like to introduce its challenges.
I would like to emphasize the importance of broad "teamwork" among human beings, robots, and computers, and of the interfaces that connect them.
The tension between theory, experiment, and practice plays out in genetic and evolutionary computation (GEC) as it plays out in other areas of science and technology. Back in the 80s, 90s, and 00s, I was always compelled to mix theory, experiment, and practical application in vigorous ways to achieve both understanding and effective computation, but my methodology often seemed to irritate more people than it satisfied. Theoreticians didn't think the work was quite "proper theory", and experimentalists/practitioners didn't think the work was sufficiently "real worldly." Although these concerns were always present in my GEC work, I haven't been thinking about them specifically over the last few years. Since resigning my tenure in 2010, I've been on a global quest to improve engineering education, a quest described in the book, A Whole New Engineer (www.wholenewengineer.org), and partially as a result of that journey, I think I can now better articulate some of the intuitions that led to the methodology of my earlier GEC career.
I start philosophically by sharing some of Don Schön's thoughts about the epistemology of practice. He asks: how is it that practitioners, whether they be physicians, architects, engineers, accountants, computer scientists, or even physical scientists, know things in practice? The conventional wisdom, Schön claims, is that practitioners know things by first mastering a body of well-understood and accepted theory, then applying that theory in practice. Schön calls this theory of practical knowing "technical rationality," and he claims that it (1) is the dominant paradigm of the epistemology of practice and (2) is largely mistaken (or at least incomplete and misleading). As an alternative, he suggests that practitioners come to know through a process of reflection-in-action, and the talk discusses some of the key ideas behind this model of practice.
Thereafter, I revisit two case studies in early GEC work, the idea and use of deception and the idea and use of approximate little models, through the lenses of technical rationality and reflection-in-action. The aim of this examination is to better understand both the objections to and the intentions of the work; these are found to line up nicely along Schön's lines. I then introduce Barry Johnson's notion of a polarity and frame technical rationality and reflection-in-action as such a polarity. Johnson suggests that polarities are often mistaken for problems with solutions, but that the appropriate stance is that the poles must be managed. Here I suggest that the complexity of GEC demands the development of a population of reflective practitioners who actively manage both poles of the technical rationality/reflection-in-action polarity. The talk discusses some of the key practices, particularly conversational practices, that can help do this.
The talk concludes with some theoretical and practical observations regarding the education of A Whole New Engineer and what these might offer the educators and education of the next generation of genetic algorithmists and evolutionary computationers.