Interview with Yoshua Bengio, Pioneer of AI

The Future of AI

 TT              You are really ahead of this… So what can you see of the future?

 YB             I don’t see the future.

 TT              Really? I thought that you could see a future we cannot see yet.

 YB             No, I have research goals and I have chosen to explore particular directions because I believe in them, so the vision I have is that a big missing piece in our current machine learning systems is what people call common sense. So think about the intelligence of a two-year-old or the intelligence of a cat, we don’t even have that in machines right now. And that’s not even starting to think about things like language where we’re seriously lacking, especially when you look at the mistakes made by current state-of-the-art systems. That sort of common sense understanding includes things that current machine learning doesn’t do, like understanding cause and effect, and discovering cause and effect. It includes a broad understanding of the world, not just one specialised task. It includes the ability to discover this model of the world through unsupervised exploration. We rely today heavily on supervised learning where all of the high-level concepts have been defined by a human teacher or human labels. There are lots of aspects of intelligence that are currently at the frontier, and I’m not the only one exploring these things, which could make a big difference in a few years, but it’s hard to be sure.

 TT              Is your goal to create a human or a superhuman?

 YB             No, my goal is to understand general principles of intelligence, how an agent can become intelligent. I and many others would like to discover the equivalent of the laws of physics, but for intelligence.

 TT              Yes.

 YB             And presumably those principles would apply to humans, to animals, to aliens, who might be intelligent, to machines that we can build, so these would be very general principles and machine learning has already established some of those principles, but we’re still missing some important ones, I believe.

 TT              But you’ve found a simple principle?

 YB             Yes, several. But behind this, there’s a meta-principle, a scientific hypothesis, that intelligence could be explained by a few simple principles. We don’t know if that hypothesis is true, but the success of deep learning in the last few years is a good validation of that hypothesis. It’s consistent with that hypothesis because deep learning is built on a few very simple principles. Most of the complexity of the systems that are trained with deep learning is not in the learning mechanisms; it’s in the data. The data contains the overwhelming share of the information in a current trained AI system, while only a little bit of information, relatively speaking, is in those principles, which are like the learning procedures.

About Toshie Takahashi

Toshie Takahashi is Professor in the School of Culture, Media and Society, as well as the Institute for AI and Robotics, Waseda University, Tokyo. She is a former Faculty Associate at the Harvard Berkman Klein Center for Internet & Society. She has held visiting appointments at the University of Oxford and the University of Cambridge as well as Columbia University. She conducts cross-cultural and trans-disciplinary research on the social impact of robots as well as the potential of AI for Social Good. [Professor, Faculty of Letters, Arts and Sciences, Waseda University. Former Faculty Associate at the Berkman Klein Center, Harvard University. She currently conducts international joint research with Harvard University and the University of Cambridge on the social impact of artificial intelligence and the use of robots. Member of the Technology Advisory Committee of the Tokyo Organising Committee of the Olympic and Paralympic Games.]