‘AI godfather’ Yoshua Bengio warns that the AI race prioritizes speed over safety. This risks unpredictable and dangerous consequences. He urges global cooperation to enforce AI regulations before autonomous systems become difficult to control.
‘AI godfather’ Yoshua Bengio helped create the foundations of the neural networks running all kinds of AI tools today, from chatbots mimicking cartoon characters to scientific research assistants. Now, he has an urgent warning for AI developers, as he explained in a Sky News interview. The race to develop ever-more-powerful AI systems is escalating at a pace that, in his view, is far too reckless.
And it’s not just about which company builds the best chatbot or who gets the most funding. Bengio believes that the rapid, unregulated push toward advanced AI could have catastrophic consequences if safety isn’t treated as a top priority.
Bengio described watching developers racing against each other, getting sloppy, and taking dangerous shortcuts. Though speed can make the difference between breaking ground on a new kind of product worth billions and playing catch-up to a rival, it may not be worth the cost to society.
That pressure has only intensified for AI developers with the rise of Chinese AI firms like DeepSeek, whose advanced chatbot capabilities have caught the attention of Western companies and governments alike. Instead of slowing down and carefully considering the risks, major tech firms are accelerating their AI development in an all-out sprint for superiority. Bengio worries this will lead to rushed deployments, inadequate safety measures, and systems that behave in ways we don’t yet fully understand.
Bengio explained that he has been warning about the need for stronger AI oversight, but recent events have made his message feel even more urgent. The current moment is a “turning point,” where we either implement meaningful regulations and safety protocols or risk letting AI development spiral into something unpredictable.
After all, more and more AI systems don’t just process information but can make autonomous decisions. These AI agents are capable of acting on their own rather than simply responding to user inputs. They’re exactly what Bengio sees as the most dangerous path forward. With enough computing power, an AI that can strategize, adapt, and take independent actions could quickly become difficult to control should humans want to take back the reins.
AI takeover
The problem isn’t just theoretical. Already, AI models are making financial trades, managing logistics, and even writing and deploying software with minimal human oversight. Bengio warns that we’re only a few steps away from much more complex, potentially unpredictable AI behavior. If a system like this is deployed without strict safeguards, the consequences could range from annoying hiccups in service to full-on security and economic crises.
Bengio isn’t calling for a halt to AI development. He made clear that he’s an optimist about AI’s abilities when used responsibly for things like medical and environmental research. He just sees a need for a shift in priorities toward more thoughtful and deliberate work on AI technology. His unique perspective may carry some weight when he calls for AI developers to put ethics and safety ahead of competing with rival companies. That’s why he participates in policy discussions at events like the upcoming International AI Safety Summit in Paris.
He also thinks regulation needs to be bolstered by companies willing to take responsibility for their systems. They need to invest as much in safety research as they do in performance improvements, he claims, though that balance is hard to imagine appearing in today’s AI melee. In an industry where speed equals dominance, no company wants to be the first to hit the brakes.
The global cooperation Bengio pitches might not appear immediately, but as the AI arms race continues, warnings from Bengio and others in similar positions of prestige grow more urgent. He hopes the industry will recognize the risks now rather than when a crisis forces the matter. The question is whether the world is ready to listen before it’s too late.