Double-edged sword
Q: In the first International AI Safety Report, you and other experts warn that AI systems are becoming increasingly autonomous, capable of planning and acting in pursuit of a goal. What are the risks associated with this development?
Prof. Bengio: We are gradually moving towards machines that are not only more intelligent but also more autonomous. Today, AI systems already surpass humans in certain domains. They can master hundreds of languages and pass PhD-level exams across multiple disciplines. However, they also have significant limitations, including poor long-term planning capabilities.
Companies are investing billions to solve this problem. They are developing AI agents capable of autonomous decision-making over extended periods, which could significantly impact the job market by replacing human roles.
Beyond economic concerns, there is an even more critical issue: loss of human control. In controlled experiments, some AI systems have even engaged in deceptive behavior to prevent being shut down, showing a form of self-preservation. This is alarming because we don’t want machines that will compete with us.
There is not much danger in that sense right now because these systems are not smart enough, and engineers can manage the risks. However, in a few years they might become smart enough that this changes, and we need to start paying attention before it is too late.
Q: There are also concerns about the potential risks to privacy and civil liberties. What role do AI researchers and policymakers play in ensuring that AI will not infringe on human rights?
Prof. Bengio: Yes, the problem is people who want to use AI to gain power. In societies where political opposition exists, if one party can use AI for disinformation or surveillance, it can reinforce and centralize power in ways that go against the population’s well-being. Therefore, we have to be very careful about the concentration of this power.
The way we manage these AIs in the future will be central to preventing this scenario. We need to make sure that no single person, no single corporation, and no single government can have total power over superintelligent AI.
Q: When we talk about AI, a debate has emerged regarding DeepSeek’s latest rise. Described as a “Black Swan” event, it is expected to significantly impact the global AI landscape. How do you see its potential positive implications and risks for the AI-driven technological economy?
Prof. Bengio: These advances are accelerating, and DeepSeek is showing that they can be made cheaper. It has significant accessibility advantages but poses serious risks.
My concern is that if open-weight AI models, like DeepSeek, are distributed completely, terrorists may exploit them for disinformation campaigns, cyberattacks, or even bioweapon development. This is a double-edged sword because while these systems become more available, cheaper, and more powerful, they also lower the barrier to misuse.
DeepSeek’s rise could also intensify the competition between AI superpowers. The danger here is that in their race to outpace each other, safety issues might be overlooked. We could all be victims of this race if we are not careful enough.
Q: Intensifying international and corporate competition in AI has significant environmental impacts. What balance should be struck between economic objectives and the imperative for ethical and environmentally conscious AI development?
Prof. Bengio: Let’s take a closer look at the energy issue. AI companies anticipate massive profits, so they are willing to shoulder high energy costs. But as AI systems grow more advanced, their energy consumption rises exponentially, placing immense strain on global energy supplies.
This surge in demand will inevitably drive up energy prices across the board, including electricity, oil, and other resources, affecting not just tech firms but households and industries worldwide. Worse yet, with renewable energy sources unable to keep pace, the reliance on fossil fuels will likely increase, exacerbating environmental harm.
This is a case where if we allow market forces and the competition between countries to run their course, there is a sense in which we all lose. This needs to be managed, and that is why government intervention is crucial. Policymakers must negotiate together to establish agreements that cap energy consumption at sustainable levels. Otherwise, the forces of competition between companies will only accelerate AI expansion in ways that are not just unsustainable but potentially dangerous.
Bridging the AI divide
Q: With all the existential and environmental risks you have mentioned, how can we ensure that AI systems, particularly self-learning or autonomous ones, align with ethical principles that reflect human values, rather than being used as tools for malicious purposes?
Prof. Bengio: There are a few things that are quite clear. First, we need more transparency in managing AI risks. Currently, there is essentially no regulatory framework in the countries where these systems are being developed. I think governments have a responsibility to at least require some form of reporting from these companies.
Responsibility is another key aspect. In many countries, legal principles hold companies accountable for products that cause harm. However, when it comes to software, liability remains a grey area. Clarifying liability laws would be a simple but effective step. If companies knew they could face lawsuits for negligence, they would have stronger incentives to manage risks properly. This approach would not require governments to dictate technical solutions; they just need to ensure accountability through legislation.
Each country should establish its own oversight mechanisms, but we also need international coordination. A network of AI safety institutes is emerging, which can harmonize best practices across countries. Eventually, we should aim for global agreements and treaties, similar to how we handle other scientific and technological risks.
Q: As AI systems grow more powerful, how do these advancements benefit all of humanity, rather than creating new divides in terms of wealth, job displacement, or political power?
Prof. Bengio: Currently, control over AI technology is concentrated in a small number of corporations and nations. To prevent AI-driven economic imbalances, global institutions, such as the UN (United Nations), should advocate for mechanisms that distribute AI-generated wealth more equitably.
Take Vietnam, for instance, a country with a strong industrial sector. If widespread automation shifts production to AI-powered facilities in wealthier nations like the US, it could lead to significant job losses and economic hardship in countries dependent on manufacturing exports.
At some point, global negotiations will be needed – a form of exchange. Countries developing advanced AI might ask other countries to refrain from creating similar, potentially dangerous AI. In return, the wealth generated by these AI systems, like new technologies and medical advancements, should be shared globally. Of course, we are very far from this, but we need to start those discussions at a global level.
Q: AI is now an inevitable part of our future. What should be the top priority in preparing the workforce to ensure that future generations work alongside AI rather than be replaced?
Prof. Bengio: It depends on the choices governments make in the coming years. If we leave everything to market forces, companies will continue automating tasks once handled by humans, leading to significant job displacement. The timeline for this shift is uncertain, but changes could happen rapidly. We may see fairly radical transformations in the next five to 10 years.
Now, what can individuals do? Expanding digital and AI education is essential, but it will not be a universal solution. Not everyone can become an AI engineer. In fact, even programming itself may eventually be replaced by AI. The jobs most likely to remain will be those that require a human component. Nurses, therapists, and managers – these professions rely on emotional intelligence and human relationships, making them harder to replace with AI.
Beyond individual adaptation, there is a larger question: Can we make deliberate choices about how AI is deployed? Again, this is something that has to be done globally, which is very challenging. We should do it in a way that does not create radical disruptions in the social fabric.
A chance to unite
Q: As a 2024 VinFuture Grand Prize Laureate, you have witnessed firsthand the growing interest and investment in AI, even in emerging economies. Given this landscape, how can the Global South effectively participate in and contribute to the global AI innovation movement?
Prof. Bengio: I think a big prize like the VinFuture Prize can make leading scientists far more aware of what is happening in Vietnam and other developing countries. Countries with strong talent pools, including Vietnam, India, and Brazil, already have well-established universities and a wealth of expertise in computing and AI. By forming strategic partnerships with countries and regions that have greater financial resources, such as Canada or Europe, it is possible to build critical-mass projects capable of competing on a global scale with major players like the US and China. If done correctly, such collaborations could drive innovation and act as a stabilizing force, ensuring that AI’s benefits are shared more equitably across the world.
Q: How significant is VinFuture Prize’s role in fostering collaboration between academia and industry to maximize AI’s impact?
Prof. Bengio: It will play a crucial role in bridging the gap between academic research and real-world AI applications. By recognizing and supporting breakthrough innovations, it encourages deeper collaboration between scientists, industry leaders, and policymakers, as well as fosters global dialogue on responsible AI. Engaging with governments and big organizations like VinFuture can help avoid reckless competition and steer it toward outcomes that benefit humanity.
However, these conversations will not emerge organically. They require a concerted effort from individuals and institutions who recognize the existential risks, like catastrophic malicious use. If we do nothing, we risk racing headlong into a dangerous future. But perhaps this is also an opportunity for all countries to unite in a way that has never happened before.