{"id":1291,"date":"2019-06-17T21:37:10","date_gmt":"2019-06-17T21:37:10","guid":{"rendered":"http:\/\/blogs.harvard.edu\/toshietakahashi\/?p=1291"},"modified":"2019-06-17T21:42:05","modified_gmt":"2019-06-17T21:42:05","slug":"interview-with-yoshua-bengio","status":"publish","type":"post","link":"https:\/\/archive.blogs.harvard.edu\/toshietakahashi\/2019\/06\/17\/interview-with-yoshua-bengio\/","title":{"rendered":"Interview with Yoshua Bengio, Pioneer of AI"},"content":{"rendered":"<div id=\"attachment_1296\" style=\"width: 252px\" class=\"wp-caption alignright\"><a href=\"http:\/\/blogs.harvard.edu\/toshietakahashi\/files\/2019\/06\/IMG_4853.jpg\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-1296\" class=\"size-medium wp-image-1296\" src=\"http:\/\/blogs.harvard.edu\/toshietakahashi\/files\/2019\/06\/IMG_4853-242x300.jpg\" alt=\"\" width=\"242\" height=\"300\" srcset=\"https:\/\/archive.blogs.harvard.edu\/toshietakahashi\/files\/2019\/06\/IMG_4853-242x300.jpg 242w, https:\/\/archive.blogs.harvard.edu\/toshietakahashi\/files\/2019\/06\/IMG_4853-768x952.jpg 768w, https:\/\/archive.blogs.harvard.edu\/toshietakahashi\/files\/2019\/06\/IMG_4853-826x1024.jpg 826w, https:\/\/archive.blogs.harvard.edu\/toshietakahashi\/files\/2019\/06\/IMG_4853.jpg 1627w\" sizes=\"auto, (max-width: 242px) 100vw, 242px\" \/><\/a><p id=\"caption-attachment-1296\" class=\"wp-caption-text\">Prof. 
Yoshua Bengio at the MILA<\/p><\/div>\n<p class=\"Transcriptbody\" align=\"left\"><span lang=\"EN-US\">On June 7<sup>th<\/sup>, 2019 at the <a href=\"https:\/\/mila.quebec\">MILA (Montreal Institute for Learning Algorithms)<\/a> in Montreal, Canada, I conducted my interview with Professor Yoshua Bengio, who is one of the pioneers of AI (Artificial Intelligence).\u00a0 He is widely known as one of the \u201cfathers of AI\u201d for his great contributions to the development of deep learning.\u00a0 He received the <a href=\"https:\/\/awards.acm.org\/about\/2018-turing\">2018 ACM A.M. Turing Award<\/a> with Geoffrey Hinton and Yann LeCun for major breakthroughs in AI.<\/span><\/p>\n<p class=\"Transcriptbody\" align=\"left\"><span lang=\"EN-US\">In my interview, I asked him about the possibilities of AGI (Artificial General Intelligence), biased data, people\u2019s concerns about GAFA (Google, Amazon, Facebook, Apple) and China, the opportunities and risks of AI, and the future of AI.\u00a0 All these questions are based on my previous experiences at the University of Cambridge, as well as the many international summits and conferences on AI that I have recently been invited to.\u00a0 <\/span><\/p>\n<p class=\"Transcriptbody\" align=\"left\"><span lang=\"EN-US\">Bengio is also noteworthy because he chooses to remain an academic, staying at <a href=\"https:\/\/www.umontreal.ca\/en\/\">the University of Montreal<\/a> as head of the MILA, while other AI leaders such as Geoffrey Hinton have left academia and now work for Google.\u00a0 Bengio continues to contribute to teaching students as well as engaging with local communities. He believes the education of future generations and people\u2019s engagement with AI are crucial for the creation of a better society that includes AI.\u00a0This is because he is aware of not only the opportunities but also the risks of AI. 
\u00a0As a co-founder of the startup <a href=\"https:\/\/www.elementai.com\">Element AI<\/a>, he is also instrumental in building a bridge between academia and the business world. \u00a0<\/span><\/p>\n<p class=\"Transcriptbody\" align=\"left\"><span lang=\"EN-US\">This is my interview with Yoshua.<\/span><\/p>\n<h3 class=\"Transcriptbody\"><b><span lang=\"EN-US\">The Road to AGI <\/span><\/b><\/h3>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">Yoshua Bengio\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Did you have some questions for me?<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">Toshie Takahashi\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Yes, of course.\u00a0 Thank you for taking the time.\u00a0 I&#8217;d like to ask you about AGI.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Okay.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 I watched some of your videos and I understand you are very positive about AGI.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 No.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">TT \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 No? I thought you showed a&#8230;<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 I&#8217;m positive that we can build machines as intelligent as humans, but completely general intelligence is a different story. I&#8217;m not positive as to how humans might use it because we&#8217;re not very wise. <\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">TT \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 Okay. 
So can you show a road map of how you could create AGI?<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Yes, the one I have chosen to explore.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">TT\u00a0 \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 I spent some time in Cambridge, and some scholars, for example Professor John Daugman, the head of the Artificial Intelligence Group at the University of Cambridge, said that AGI is an illusion created by science fiction, because, as he put it, we don&#8217;t even understand a single neuron, so how can we create AGI?<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Yes, I disagree with him.<\/span><span lang=\"EN-US\">\u00a0<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">TT \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 Okay, so could you tell me about that?<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Sure. Having worked for decades on AI and machine learning, I feel strongly that we have made very substantial progress, and in particular we have uncovered some principles, which today allow us to build very powerful systems. I also recognise that there&#8217;s a long way to go towards human level AI, and I don&#8217;t know how long it&#8217;s going to take. So I didn&#8217;t say we&#8217;ll find human level AI in five years, or ten years or 50 years. I don&#8217;t know how much time it&#8217;s going to take, but the human brain is a machine. It&#8217;s a very complex one and we don\u2019t fully understand it, but there\u2019s no reason to believe that we won\u2019t be able to figure out those principles. 
<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 I see. Mr. Tom Everitt at DeepMind said that he could create AGI in a couple of decades, maybe 20 or 30 years.\u00a0 Not too far in the future.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 How does he know?<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 I don&#8217;t know. I asked him but he didn&#8217;t answer it.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Nobody knows.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Nobody knows. Yes, of course. When I met Professor Sheldon Lee Glashow, a Nobel Prize winning American theoretical physicist, he told us that we won\u2019t have AGI.\u00a0Or even if we have it, it\u2019d be very far away.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Possibly, so we don&#8217;t know. It could be ten years, it could be 100 years. <\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Oh really?<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Yes. 
<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Okay.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 It&#8217;s impossible to know these things. There\u2019s a beautiful analogy that I heard my friend Yann LeCun mention first. As a researcher, our progress is like climbing a mountain and as we approach the peak of that mountain we realise there&#8217;s some other mountains behind.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Yes, exactly.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 And we don\u2019t know what other higher peak is hidden from our view right now. 
<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 I see.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 So it might be that the obstacles that we&#8217;re currently working on are going to be the last ones to reach human level AI or maybe there will be ten more big challenges that we don\u2019t even perceive right now, so I don&#8217;t think it&#8217;s plausible that we could really know when, how many years, how many decades, it will take to reach human level AI.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 I see.\u00a0 But some people also say that we need a different breakthrough to create AGI. We need a kind of paradigm shift from our current approach. \u00a0Do you think that you can see the road to reach if you keep on with deep learning? So this is a right road?<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 We have understood as I said some very important principles through our work on deep learning and I believe those principles are here to stay, but we need additional advances that are going to be combined with things we have already figured out. I think deep learning is here to stay, but as is, it\u2019s obviously not sufficient to do, for example, higher-level cognition that humans are doing. We&#8217;ve made a lot of progress on what psychologists call System 1 cognition, which is everything to do with intuitive tasks. Here is an example of what we&#8217;ve discovered, in fact one of the central ideas in deep learning: the notion of distributed representation. 
I\u2019m very, very sure that this notion will stay because it&#8217;s so powerful. <\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Wonderful! I&#8217;m happy to hear that.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Yes.<\/span><br \/>\n<!--nextpage--><\/p>\n<h3 class=\"Transcriptbody\"><b><span lang=\"EN-US\">The Future of AI <\/span><\/b><\/h3>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 You are really ahead of this&#8230;So what can you see of the future?<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 I don\u2019t see the future.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Really? I thought that you could see a future we cannot see yet.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 No, I have research goals and I have chosen to explore particular directions because I believe in them, so the vision I have is that a big missing piece in our current machine learning systems is what people call common sense. So think about the intelligence of a two-year-old or the intelligence of a cat, we don&#8217;t even have that in machines right now. And that&#8217;s not even starting to think about things like language where we&#8217;re seriously lacking, especially when you look at the mistakes made by current state-of-the-art systems. 
That sort of common sense understanding includes things that current machine learning doesn&#8217;t do, like understanding cause and effect, and discovering cause and effect. It includes a broad understanding of the world, not just one specialised task. It includes the ability to discover this model of the world through unsupervised exploration. We rely today heavily on supervised learning where all of the high-level concepts have been defined by a human teacher or human labels. There are lots of aspects of intelligence that are currently at the frontier, and I&#8217;m not the only one exploring these things, which could make a big difference in a few years, but it&#8217;s hard to be sure.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Is your goal to create a human or a superhuman?<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 No, my goal is to understand general principles of intelligence, how an agent can become intelligent. I and many others would like to discover the equivalent of the laws of physics, but for intelligence.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Yes.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 And presumably those principles would apply to humans, to animals, to aliens, who might be intelligent, to machines that we can build, so these would be very general principles and machine learning has already established some of those principles, but we&#8217;re still missing some important ones, I believe. 
<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 But you&#8217;ve found a simple principle?<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Yes, several. But behind this, there&#8217;s a meta-principle, a scientific hypothesis, that intelligence could be explained by a few simple principles. We don&#8217;t know if that hypothesis is true, but the success of deep learning in the last few years is a good validation of that hypothesis. It&#8217;s consistent with that hypothesis because deep learning is built on a few very simple principles. Most of the complexity of the systems that are trained with deep learning is not in the learning mechanismsp; it&#8217;s in the data. The data contains the overwhelming share of the information in a current trained AI system, while only a little bit of information, relatively speaking, is in those principles, which are like the learning procedures.<\/span><span lang=\"EN-US\">\u00a0<\/span><!--nextpage--><\/p>\n<h3 class=\"Transcriptbody\"><b><span lang=\"EN-US\">Biased Data and AI for Humanity<\/span><\/b><\/h3>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 You said that the intelligence is from knowledge and knowledge is acquired from data?<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 That&#8217;s right.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 But if data is biased, what happens?\u00a0Some social scientists criticize most 
data for being based on male, Caucasian, middle-aged&#8230; <\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 That\u2019s right. <\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 So if a young woman of colour applies, for instance, for an insurance policy, AI might say no because they don\u2019t have enough data for those applicants.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Yes, absolutely. I think there are technical solutions and social solutions to this problem. We have to change our social norms, for example, so that companies building products use technological solutions and logistical solutions, for example, in the way that the data is collected, in the way that it&#8217;s described and managed, and in the particular learning algorithms that are used, because we know techniques that can mitigate the bias and discrimination. So we can probably include those techniques, but more importantly we need to make sure that companies and governments use them. 
<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Is that why you think it&#8217;s important that both social scientists and natural scientists work for AI together?<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Yes.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 I love the idea of \u201cAI for humanity\u201d as you have in the Mila here.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Right, because the AI researcher might not realise some of the social issues that could be involved in the deployment. 
I think it&#8217;s particularly important for people who are doing research or development of products that is close to something that people will use, in large-scale deployment for example.<\/span><br \/>\n<!--nextpage--><\/p>\n<h3 class=\"Transcriptbody\"><b><span lang=\"EN-US\">People\u2019s Concerns about GAFA and China<\/span><\/b><\/h3>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 People are concerned about big companies such as Google or Apple, or a country like China because they have a huge amount of data so they can do what they want.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Yes, but they probably also want to be considered as positive, responsible agents in society, and so if we do the right communication, to explain the issues, and engage in social discussion about these things I&#8217;m optimistic that we can have those social norms be improved. And of course, it means changing how to do things.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 I met Dan Klein, chief data officer from Valtech in Cambridge, UK. 
He is also concerned about China, because China has a huge amount of data which computer scientists can use to develop AI, while the UK and EU have limited access to data because of the Data Protection Act.\u00a0 Also, Chinese companies pay high salaries to computer engineers outside of China, so great European engineers are going to leave for China.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 I don&#8217;t think our European engineers are going to China.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Oh really? Maybe American?\u00a0 Oh, I don&#8217;t know.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Not much. No, but they don&#8217;t need that. They have plenty of good scientists and engineers. The issue with China is that it&#8217;s difficult for many of us to have confidence that the current political system of China will behave responsibly, but it&#8217;s true of many countries that governments are not very responsible. If you consider, for example, climate change, the US has been behaving very badly. <\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Yes. <\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 So even if it&#8217;s a democracy, it doesn&#8217;t mean that governments will be doing the right things. 
So I think every country has an interest in being part of the global consensus for obvious economic reasons, but also, to feel good about themselves. So I think we should not pit countries against each other and peoples against each other.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Yes.\u00a0 I totally agree with you.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 It\u2019s not going to help.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 If you develop an algorithm in Canada, would it also work in Japan or should we adapt it to our society?<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 No the algorithms are very generic. It\u2019s like math. \u00a0Addition is the same in Japan as in Canada.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Yes, but data itself has some cultural meanings\u2026<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 No, but that&#8217;s data. That&#8217;s not the algorithm. <\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Okay. I see. 
<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 So the learning procedures are going to be the same, but the data will be different, and the systems that are trained using the learning procedures and the data, of course, will be different in different countries.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 So once we have the algorithm, we can use it with our own data.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 That\u2019s right.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Then it will work very well. <\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 It works better.<\/span><br \/>\n<!--nextpage--><\/p>\n<h3 class=\"Transcriptbody\"><b><span lang=\"EN-US\">The Opportunities and Risks of AI<\/span><\/b><\/h3>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Okay, wonderful. You might be getting tired of talking about the opportunities and risks of AI because people always like to ask you as one of the leading experts on AI.\u00a0 But I also think it\u2019s very important for us to understand both.\u00a0 Then we can maximize opportunities and minimize risks in order to get social benefit from AI.\u00a0 So what are the biggest opportunities and the biggest risks? 
I know, there are many, many risks, but from your point of view, what are you concerned about the most?<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 So with AI in terms of opportunities, I think there&#8217;s huge potential for social good, in healthcare, in the environment, fighting climate change, which is a very important question for the planet. Maybe a little bit further down the road in education as well. And on the risk side, I think the biggest risk really is a threat to democracy, a threat to the stability of our social fabric because of things like killer drones, because of things like political advertising and the influence that one can buy on social networks, because of things like concentration of power, in a few hands, in a few people, a few companies, a few countries, and because of potential social unrest that could come from rapid automation. So all of these could be disruptive to society and we have to be careful where we cross the red line between what is acceptable and what is not acceptable in the applications of AI.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Who determines the red line?<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 It&#8217;s a very good question. Humans define their social norms through a global discussion, and in different countries it might be different types of people. Scholars usually have more impact on the result, and scientists, I think, should be part of the discussion, but regular citizens should be part of the discussion as well. 
<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Yes, I agree.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 At the end of the day, in decent democratic countries it&#8217;s going to be democratic decisions where we put those lines. Where I think it&#8217;s trickier is that many of these decisions cannot be taken in isolation in each country. There has to be global international coordination.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Yes, definitely. I met Mr. Irakli Beridze, the head of Centre for Artificial intelligence and Robots, the United Nations Interregional Crime and Justice Research Institute (UNICRI), and he said that he goes to Russia, Syria and other countries because the governments of those countries also have to cooperate. <\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Right. Yes. It&#8217;s very important.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 But it\u2019d be very difficult. <\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Yes, unfortunately, we don&#8217;t have a good international coordination framework. 
The UN is very weak.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Oh really?<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Oh yes. It doesn&#8217;t have any power. <\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Really? I thought it had power. No?<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB\u00a0 \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 No, the UN doesn&#8217;t have nearly enough power. One issue I&#8217;m a little bit more familiar with is killer robots and lethal autonomous weapons. The Secretary-General has been saying for a while now that this is both morally repugnant and dangerous for global security, but the problem is that a lot of the decisions in UN decision-making committees and treaties happen by complete consensus. So if just one country in the committee says no, there&#8217;s no treaty. <\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Yes.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 That cannot work. The problem is that individual countries have been too scared of losing some sovereignty, some power, to a higher level, which would be, for example, international government, but we have to do that, otherwise we will not solve the climate change problem. We will not solve fiscal issues across the planet. We will not prevent dangers from misuse of AI. 
So there are lots of issues for which we have to have global coordination. <\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">TT \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Yes. <\/span><span lang=\"EN-US\">I also believe we need more discussions about living with AI, both locally and globally. \u00a0<\/span><span lang=\"EN-US\">Thank you very much.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><span lang=\"EN-US\">YB \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 You&#8217;re welcome.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><\/p>\n<p class=\"Transcriptbody\"><b><span lang=\"EN-US\">Acknowledgement<\/span><\/b><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">I would like to express my gratitude to Myriam C\u00f4t\u00e9, director of AI for Humanity at the Mila, for her kind invitation and great support of my cross-cultural research on AI for good. 
\u00a0This research is supported by the KDDI Foundation.<\/span><\/p>\n<p class=\"Transcriptbody\"><span lang=\"EN-US\">\u00a0<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>On June 7th, 2019 at the MILA (Montreal Institute for Learning Algorithms) in Montreal, Canada, I conducted my interview with Professor Yoshua Bengio, who is one of the pioneers of AI (Artificial Intelligence).\u00a0 He is well-known as the \u201cfather of &hellip; <a href=\"https:\/\/archive.blogs.harvard.edu\/toshietakahashi\/2019\/06\/17\/interview-with-yoshua-bengio\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":2464,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[117681,13076,117682,117676,117657,117678,134,117677,3022,23092,117668],"tags":[],"class_list":["post-1291","post","type-post","status-publish","format-standard","hentry","category-agi","category-ai","category-biased-data","category-cross-disciplinary-approach","category-culture-and-communication","category-deep-learning","category-education","category-future-society","category-opportunities","category-risks","category-robot"],"jetpack_featured_media_url":"","_links":{"self":[{"href":"https:\/\/archive.blogs.harvard.edu\/toshietakahashi\/wp-json\/wp\/v2\/posts\/1291","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/archive.blogs.harvard.edu\/toshietakahashi\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/archive.blogs.harvard.edu\/toshietakahashi\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/archive.blogs.harvard.edu\/toshietakahashi\/wp-json\/wp\/v2\/users\/2464"}],"replies":[{"embeddable":true,"href":"https:\/\/archive.blogs.harvard.edu\/toshietakahashi\/wp-json\/wp\/v2\/comments?post=1291"}],"version-history":[{"count":14,"href":"https:\/\/archive.blogs.harvard.edu\/toshietakahashi\/wp-json\/wp\/v2\/p
osts\/1291\/revisions"}],"predecessor-version":[{"id":1306,"href":"https:\/\/archive.blogs.harvard.edu\/toshietakahashi\/wp-json\/wp\/v2\/posts\/1291\/revisions\/1306"}],"wp:attachment":[{"href":"https:\/\/archive.blogs.harvard.edu\/toshietakahashi\/wp-json\/wp\/v2\/media?parent=1291"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/archive.blogs.harvard.edu\/toshietakahashi\/wp-json\/wp\/v2\/categories?post=1291"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/archive.blogs.harvard.edu\/toshietakahashi\/wp-json\/wp\/v2\/tags?post=1291"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}