Urgent research into the risks posed by artificial intelligence is needed “now, not in 10 years’ time”, Google DeepMind chief executive Demis Hassabis has warned, calling on governments and tech companies to boost investment in AI safety as systems become more powerful and autonomous. Speaking at the AI Impact Summit in Delhi, the Google AI boss said the world had a “narrow window” to understand and mitigate the most serious threats before existing institutions are overwhelmed.
Sir Demis, whose London-based DeepMind lab sits at the heart of Google’s AI push, said the most pressing dangers fall into two broad categories: misuse by “bad actors” and the risk of losing control as AI systems gain autonomy.
● On misuse, he warned that tools built to accelerate medical research or automate software development could be repurposed for cyber‑attacks, disinformation campaigns or even the design of biological weapons.
● On technical risk, he pointed to increasingly “agent‑like” systems that can plan and act with less human oversight, saying this could see AI doing “things we didn’t intend when we designed them”.
“These systems are getting more capable all the time, and as they become more autonomous, they become both more useful and more dangerous,” he said in an interview on the sidelines of the Delhi summit. He added that keeping safeguards ahead of the pace of innovation is “the hardest part” for regulators and developers alike.
Hassabis told BBC News that research on the threats of AI “needs to be done urgently” and at a far greater scale than today’s fragmented efforts. While huge sums are flowing into building ever-larger models and AI-powered products, he argued that funding and talent for safety science, evaluation and oversight are still lagging.
He has urged governments to back dedicated AI safety institutes capable of independently testing cutting‑edge models for risks ranging from jailbreaks and deception to cyber‑offence and bio‑security. The UK has already set up an AI Safety Institute, which is now expected to gain a formal legal mandate, while the US and several other countries are building similar facilities.
“We need strong safeguards, rigorous testing and a much better understanding of how these systems behave before we deploy them widely,” Hassabis said, adding that voluntary industry commitments would not be enough on their own. He compared the task ahead to nuclear safety and global health security, arguing that advanced AI now “belongs in that category of risks we have to treat at a societal level”.
The Delhi gathering has drawn representatives from more than 100 countries alongside the heads of major tech firms. Many leaders, including Hassabis, have pushed for some form of coordinated international oversight of the most powerful AI models, similar to frameworks used in areas such as civil aviation or nuclear materials.
However, the United States delegation signalled firm opposition to any notion of “global governance” of AI, arguing that innovation should not be handed to “international bureaucracies and centralized control”. Washington has instead favoured a patchwork of national rules, voluntary safety pledges from companies and sector‑specific guardrails.
The split highlights a growing fault line: while the European Union’s AI Act imposes binding obligations on high‑risk systems and sets special rules for general‑purpose models, the US has leaned on executive orders and state‑level initiatives, and countries such as India are still shaping their approach. Analysts say this divergence risks creating a “compliance splinternet” in which the same AI product faces very different expectations in different markets.
Hassabis said the US and wider West remain “slightly ahead” of China in the race to build the most capable AI systems, but warned this lead could vanish “within months”. That, he argued, makes it even more important to establish minimum global safety standards now, before competition for AI dominance undercuts caution.
China has already introduced strict controls on generative AI content and requires security reviews for some large models, while the EU’s new regime bans practices such as social scoring and some forms of biometric surveillance outright. Meanwhile, countries from Japan to the UK have moved in 2025 to harden their own rules, shifting from soft “guidance” towards binding regulation for high‑risk use cases like hiring, credit scoring and critical infrastructure.
Hassabis said he felt a personal responsibility to “be bold and responsible” as his teams push the frontier, but stressed that DeepMind is only “one entity in the ecosystem” and cannot slow progress alone. “We don’t always get it right,” he said, “but we try to be more accurate than most,” adding that meaningful safety would require coordination between rival tech firms, regulators and academic labs.
Despite his warnings, the DeepMind boss remains optimistic about AI’s potential to transform science, productivity and creativity if handled carefully. His own work on AlphaFold, which used AI to predict the structure of proteins and won him the 2024 Nobel Prize in Chemistry, is credited with unlocking new avenues in drug discovery and biology.
Hassabis argued that science and technology education will be “more important than ever” over the next decade, even as AI tools become better at writing code and automating technical tasks. He believes AI will widen access to software creation and digital entrepreneurship, shifting the edge from purely technical skills towards “taste, creativity and judgment”.
For now, though, his central message is that safety must keep pace with capability. “AI could be one of the most beneficial technologies in history,” he said, “but only if we invest as seriously in understanding its risks as we do in building it.”