The Opportunities and Risks of AI
TT Okay, wonderful. You might be getting tired of talking about the opportunities and risks of AI, because people always ask you about them as one of the leading experts on AI. But I also think it’s very important for us to understand both. Then we can maximize the opportunities and minimize the risks in order to get social benefit from AI. So what are the biggest opportunities and the biggest risks? I know there are many, many risks, but from your point of view, what are you concerned about the most?
YB So with AI, in terms of opportunities, I think there is huge potential for social good: in healthcare, in the environment, in fighting climate change, which is a very important question for the planet, and maybe a little further down the road in education as well. On the risk side, I think the biggest risk really is a threat to democracy, a threat to the stability of our social fabric, because of things like killer drones, because of things like political advertising and the influence that one can buy on social networks, because of things like concentration of power in a few hands, in a few people, a few companies, a few countries, and because of potential social unrest that could come from rapid automation. All of these could be disruptive to society, and we have to be careful about where we draw the red line between what is acceptable and what is not acceptable in the applications of AI.
TT Who determines the red line?
YB It’s a very good question. Humans define their social norms through a global discussion, and in different countries that discussion might involve different kinds of people. Scholars usually have more influence on the outcome, and scientists, I think, should be part of the discussion, but ordinary citizens should be part of it as well.
TT Yes, I agree.
YB At the end of the day, in decent democratic countries it’s going to be democratic decisions where we put those lines. Where I think it’s trickier is that many of these decisions cannot be taken in isolation in each country. There has to be global international coordination.
TT Yes, definitely. I met Mr. Irakli Beridze, the head of the Centre for Artificial Intelligence and Robotics at the United Nations Interregional Crime and Justice Research Institute (UNICRI), and he said that he travels to Russia, Syria and other countries because the governments of those countries also have to cooperate.
YB Right. Yes. It’s very important.
TT But it’d be very difficult.
YB Yes, unfortunately, we don’t have a good international coordination framework. The UN is very weak.
TT Oh really?
YB Oh yes. It doesn’t have any power.
TT Really? I thought it had power. No?
YB No, the UN doesn’t have nearly enough power. One issue I’m a little more familiar with is killer robots and lethal autonomous weapons. The Secretary-General has been saying for a while now that these are both morally repugnant and dangerous for global security, but the problem is that a lot of decisions in UN decision-making committees and treaty negotiations require complete consensus. So if just one country in the committee says no, there’s no treaty.
YB That cannot work. The problem is that individual countries have been too scared of losing some sovereignty, some power, to a higher level, such as an international government, but we have to do that, otherwise we will not solve the climate change problem. We will not solve fiscal issues across the planet. We will not prevent dangers from the misuse of AI. So there are lots of issues for which we have to have global coordination.
TT Yes. I also believe we need more discussions about living with AI both locally and globally. Thank you very much.
YB You’re welcome.
I would like to express my gratitude to Myriam Côté, director of AI for Humanity at Mila, for her kind invitation and great support of my cross-cultural research on AI for good. This research is supported by the KDDI Foundation.