by Sakshi Dhingra
Demis Hassabis, chief executive of Google DeepMind, has urged the global AI community to accelerate research into advanced AI threats, warning that safety and risk-mitigation efforts are struggling to keep pace with rapidly improving systems.
Speaking on the sidelines of the India AI Impact Summit in New Delhi, Hassabis highlighted two primary risk categories that are becoming more pressing as frontier models grow more capable: malicious misuse by “bad actors” and technical control risks associated with increasingly autonomous, agent-like AI systems.
Hassabis singled out biosecurity and cybersecurity as areas where urgent work is needed, cautioning that today’s systems are already “pretty good” at cyber-related tasks. His central warning: as offensive capabilities rise, societies must ensure defences stay stronger than offences, or risk a widening security imbalance that favours attackers.
The concern extends beyond classic hacking. Safety researchers and policymakers increasingly worry that AI could scale up social engineering, accelerate vulnerability discovery, and reduce the expertise barrier for sophisticated attacks—making prevention, monitoring, and rapid response essential.
A major theme of Hassabis’ remarks is the transition into what he calls an agentic era, where AI systems act with more independence and perform multi-step tasks over time. He has suggested this shift could accelerate over the next one to two years, which would make governance, testing, and deployment controls an immediate priority rather than a future one.
Hassabis argues that autonomy is a double-edged sword: it increases usefulness, but also raises the risk that systems will do things their designers did not intend—especially when models interact with real-world tools, services, and workflows.
Hassabis also said the world may see artificial general intelligence within five to eight years, a forecast that, if even partially correct, shrinks the timeframe for developing robust technical safeguards, evaluation standards, and cross-border rules for high-capability systems.
At the same time, he pushed back against the idea that current systems have already reached AGI, pointing to core gaps such as limited continual learning, weak long-horizon planning, and inconsistent reliability.
The safety message lands amid sharp disagreement over global coordination. While Hassabis has emphasized the need for international cooperation and at least minimum standards for AI deployment because digital systems cross borders, the US delegation delivered a starkly different position.
Michael Kratsios, Director of the White House Office of Science and Technology Policy, told summit attendees the United States “totally” rejects global governance of AI, arguing that centralized control would impede adoption and innovation.
Child safety and deepfakes became a major parallel thread at the summit. Reporting from Delhi cited UNICEF and Interpol research across 11 countries indicating that at least 1.2 million children reported their images had been manipulated into sexually explicit deepfakes in the past year. Leaders argued this is evidence that AI-enabled harms are already operating at scale.
Even as leaders argue over safety and governance, investment in the AI stack is accelerating. Coverage from the summit notes that Google announced $15 billion in investments tied to datacentres and subsea cables, underscoring how rapidly compute and connectivity are scaling in key markets such as India.
Bottom line: Hassabis’ warning reframes AI safety as an urgent research agenda grounded in near-term cyber and bio risks and the fast-arriving “agentic” wave—yet the summit also exposed how divided major powers remain on the question of global rules.