Who Governs AI When the World Is Divided?
From Bletchley to India, AI summits are maturing. But US-China rivalry, ungoverned agentic systems, and structural coordination failures threaten to fracture the architecture before it sets.
By Cyrus Hodes, Founder, AI Safety Connect
AI governance is at an inflection point. The summits are multiplying, the declarations are accumulating, and the institutions are slowly taking shape. But the harder question is whether any of it is actually building toward something durable, or whether geopolitical rivalry is quietly hollowing it out before the architecture can set.
I have spent the past several years working across these processes, from OECD expert groups to the Global Partnership on AI to the India AI Impact Summit in February 2026. What follows is an honest assessment of where global AI governance stands: what is working, where the coordination is genuinely failing, and what the window for action looks like from where I sit.
The summit that changed the frame
The India AI Impact Summit in February 2026 marked a structural shift in AI governance. For the first time, a major global AI summit was hosted in the Global South, and it reframed the conversation in ways that previous summits had not. Bletchley in 2023 focused on frontier risk; Seoul in 2024 broadened the stakeholder base; Paris in 2025 emphasized action-oriented commitments. India centered its agenda on three foundational pillars: People, Planet, and Progress, placing development outcomes, public service delivery, and inclusive growth at the core of the governance discussion rather than treating them as secondary concerns.
This matters because AI governance has, until now, been shaped overwhelmingly by the countries building frontier systems. The assumptions embedded in those frameworks reflect specific institutional contexts, regulatory traditions, and risk profiles. India’s approach, articulated through its November 2025 AI Governance Guidelines, represents a distinctive path: prioritizing innovation through existing legal frameworks rather than standalone AI legislation, and positioning safety as an enabler of trust and scale rather than a constraint on development. The summit drew on eight regional AI conferences held across Indian states between October 2025 and January 2026, feeding region-specific use cases and policy gaps directly into the national agenda. That kind of bottom-up input is rare in international governance processes, and it produced a richer, more grounded conversation as a result.
AI Safety Connect participated in the summit precisely because this shift matters for the broader governance trajectory.
“A governance architecture shaped by only a handful of nations will struggle to achieve legitimacy or effectiveness across the diverse contexts where AI is actually being deployed.”
From inclusion to agenda-setting
The question is now less about whether Global South countries are included and more about whether they have genuine agenda-setting power. India demonstrated what that looks like in practice. The AI Compendium, released during the summit, documented real-world AI applications across priority sectors: health, energy, education, agriculture, gender empowerment, accessibility. These are practical case studies from countries where infrastructure varies widely, languages are diverse, and populations are vast.
The Strategic Foresight Group’s report “India’s AI Gambit: Navigating the Global Race,” launched in January 2026, made the national security case explicitly: if India depends on foreign suppliers for critical AI tools and safety testing, it faces leverage vulnerabilities in moments of crisis. Investing in domestic AI safety research, shared testbeds, and international coalitions gives India bargaining power. This is a sovereign strategic argument, not a request for inclusion.
India’s linguistic and cultural diversity, combined with its scale, also positions it to advance multilingual and multimodal AI systems in ways that monolingual frameworks cannot. When AI is being deployed for healthcare access in rural communities, financial inclusion for underbanked populations, or agricultural decision-support across varied climatic regions, the governance frameworks need to reflect those realities.
The implication for equity is straightforward: governance frameworks designed exclusively in Washington, Brussels, or Beijing will be incomplete. They will miss failure modes, deployment challenges, and safety requirements that only emerge in different institutional and social contexts. This is why the India summit’s emphasis on AI for public service delivery was not a soft addition to the governance conversation but rather a correction.
What middle powers can actually do
The conventional view holds that middle powers lack hard power in frontier AI: they are not building the largest models, they do not control the compute supply chain, and they do not host the dominant frontier labs. But this framing mistakes the nature of the leverage available to them.
Middle powers have two underused instruments. The first is collective market and regulatory leverage. If multiple large and mid-sized economies align on baseline safety expectations for AI systems deployed in their markets, that shapes incentives for frontier developers regardless of where those developers are based. Procurement standards, market access conditions, and safety certification requirements become powerful tools when coordinated across a bloc of countries representing billions of users.
The second is coalition diplomacy.
“The risk we should be most concerned about is not that middle powers fail to build their own frontier systems. It is that geopolitical competition between the US and China fractures the governance landscape into incompatible blocs, leaving everyone else to pick sides rather than shape outcomes.”
Middle powers can resist that dynamic by building coordinated positions that exert pressure on both superpowers to maintain shared safety baselines. This is not a passive role. It requires institutional capacity, technical literacy, and sustained diplomatic investment.
At the India summit, Dutch Prime Minister Dick Schoof made this argument in a special address at an AI Safety Connect event. His framing was direct: middle powers represent the largest part of the world economy and the strongest democratic traditions, and their strength lies in building coalitions. The Netherlands provides the lithography systems, Japan the precision technology, Germany the mirrors, Taiwan the production facilities, Korea the memory chips, and India the talent base. The AI governance architecture is being established now. The window for middle powers to shape it will not remain open indefinitely.
The fragmentation trap
US–China competition creates a gravitational pull toward fragmentation. Each summit risks becoming a venue where great-power positioning displaces substantive technical and governance work. The practical consequence is that important coordination mechanisms, including shared evaluation methodologies, incident reporting standards, and verification protocols, get stalled because they become entangled in broader strategic rivalries.
But the same competitive dynamic that threatens fragmentation also creates the strategic opening for other actors. When bilateral channels between Washington and Beijing are constrained, multilateral and Track 1.5 forums become essential infrastructure.
“Neutral convening spaces that maintain credibility across geopolitical blocs can facilitate conversations that official diplomatic channels cannot. This is not a substitute for great-power agreement. It is the connective tissue that keeps coordination alive when official processes are gridlocked.”
The open-source dimension adds further complexity. Chinese frontier labs, including Moonshot AI and DeepSeek, have released powerful open-weights models that rival proprietary systems. This accelerates diffusion of advanced capabilities beyond the control of any single jurisdiction, making coordination on safety standards more urgent while simultaneously making enforcement through traditional export controls harder.
The risk now is less about AI governance summits becoming useless under great-power rivalry and more about them being captured by it, reduced to declarations of principle that obscure the absence of practical coordination. The antidote is building governance mechanisms that function regardless of which geopolitical configuration prevails: shared evaluation tools, interoperable safety standards, and trusted channels for incident reporting that work across blocs rather than within them.
Where coordination is actually failing
Consider what happened with Grok’s image generation capabilities at the turn of 2025–2026. When users discovered the system could generate non-consensual intimate imagery, the response was fragmented. Different countries reacted on their own timelines and under their own legal frameworks. Partial fixes eventually appeared, but journalists who tested them found they did not work reliably. The episode took weeks to produce even those inadequate results.
This was an easy case, with near-universal agreement that non-consensual intimate imagery is harmful and no legitimate use case to defend. Still, no coordinated response mechanism existed.
The coordination failures are structural, not incidental. Competitive pressures push labs to ship faster, even when internal teams advocate for caution. There is no credible way to verify claims about model capabilities, training procedures, or safety measures across labs, making mutual trust difficult to establish. Frontier labs also lack standardized mechanisms for reporting serious incidents, so problems remain siloed until public scandals force disclosure. And regulatory frameworks differ enough across jurisdictions that companies can exploit gaps.
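To make the reporting gap concrete, consider what a standardized incident report might look like as a shared data structure. The sketch below is purely hypothetical: every field name, severity tier, and formatting choice is an assumption introduced for illustration, since no such cross-lab standard exists today, which is precisely the point.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from enum import Enum
import json


class Severity(Enum):
    """Illustrative severity tiers; real tiers would require multilateral agreement."""
    LOW = "low"
    MODERATE = "moderate"
    SEVERE = "severe"
    CRITICAL = "critical"


@dataclass
class IncidentReport:
    """Hypothetical minimal schema for a cross-jurisdiction incident report.

    All field names are assumptions made for illustration; no such
    shared standard exists among frontier labs today.
    """
    incident_id: str                  # globally unique identifier
    reporting_org: str                # lab or deployer filing the report
    model_identifier: str             # implicated system, at an agreed granularity
    discovered_at: str                # ISO 8601 timestamp of discovery
    severity: Severity
    harm_category: str                # e.g. "non-consensual imagery"
    description: str                  # plain-language account of the incident
    mitigations: list[str] = field(default_factory=list)
    affected_jurisdictions: list[str] = field(default_factory=list)


def to_wire_format(report: IncidentReport) -> str:
    """Serialize a report for exchange between national contact points."""
    payload = asdict(report)
    payload["severity"] = report.severity.value  # make the enum JSON-safe
    return json.dumps(payload, indent=2)


if __name__ == "__main__":
    example = IncidentReport(
        incident_id="2026-0001",
        reporting_org="ExampleLab",
        model_identifier="example-model-v3",
        discovered_at=datetime.now(timezone.utc).isoformat(),
        severity=Severity.SEVERE,
        harm_category="non-consensual imagery",
        description="Image generator produced prohibited content under adversarial prompting.",
        mitigations=["classifier patch deployed", "capability gated pending review"],
        affected_jurisdictions=["multiple"],
    )
    print(to_wire_format(example))
```

Even a schema this minimal would force agreement on contested questions, such as who counts as the reporting party and how severity is tiered, which is why the negotiation matters more than the code.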
These failures matter because the cases ahead will be harder. Existing models can already assist non-experts in designing biological agents. AI tools are being repurposed as semi-autonomous cyber attackers. These are documented capabilities, not speculative scenarios.
“If we cannot coordinate effectively on cases where everyone agrees, the prospects for managing genuinely contested or ambiguous situations are poor. The infrastructure for collective response needs to be built before the next crisis, not improvised during it.”
The governance gap nobody is talking about
The governance conversation is largely focused on the right problems: bias, misuse, concentration of power, the safety of frontier systems. But it is happening at the wrong layer. Current frameworks assume that humans initiate actions and AI systems execute them. That assumption is becoming outdated.
The rapid proliferation of agentic AI, systems that act autonomously, chain tasks together, and operate across platforms without step-by-step human direction, is outpacing governance frameworks designed for a simpler paradigm. When an AI agent autonomously receives information, researches it, produces content, and distributes it across messaging platforms without human prompting at each stage, the accountability chain becomes unclear. Who is responsible when the output is harmful? The developer of the agent framework? The user who deployed it? The platform that hosted it?
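One way to make those questions answerable is a per-step audit trail that attributes every autonomous action to a developer, a deployer, and a platform at the moment it happens. The sketch below is a hypothetical record format, not an existing standard; the names and fields are assumptions made for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class AgentAction:
    """One step in a hypothetical audit trail for an autonomous agent.

    Every field name is an illustrative assumption; no such reporting
    standard exists for agentic systems today.
    """
    timestamp: str        # when the step occurred (ISO 8601)
    agent_id: str         # the autonomous system taking the step
    developer: str        # who built the agent framework
    deployer: str         # who configured and launched this agent
    platform: str         # where the action landed
    action: str           # what the agent actually did
    human_approved: bool  # whether a person signed off on this step


def trace(chain: list[AgentAction]) -> None:
    """Print the accountability chain for a completed agent task."""
    for step in chain:
        mode = "human-approved" if step.human_approved else "autonomous"
        print(f"{step.timestamp} [{mode}] {step.agent_id} via {step.platform}: {step.action}")


if __name__ == "__main__":
    now = datetime.now(timezone.utc).isoformat()
    task = [
        AgentAction(now, "agent-7", "FrameworkCo", "user-42", "web", "gathered source material", False),
        AgentAction(now, "agent-7", "FrameworkCo", "user-42", "editor", "drafted content", False),
        AgentAction(now, "agent-7", "FrameworkCo", "user-42", "messenger", "distributed content", False),
    ]
    trace(task)
```

The design point is that accountability becomes a property of the log rather than the agent: if each step is attributed when it occurs, the questions above have answers; if not, they do not.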
The Moltbook episode in early 2026 illustrated the stakes vividly. A platform marketed as a social network for AI agents attracted enormous attention and over a million claimed accounts. Security researchers subsequently discovered that the platform had no meaningful verification of whether accounts were AI or human, left databases of passwords and API keys exposed, and hosted widespread malware. The dramatic headlines about “agents forming religions” obscured the more substantive lesson: when agent infrastructure is deployed without governance, oversight, or verification, the result is a security disaster. The spectacle was largely performative, but the vulnerabilities were real.
As open-source agent frameworks become more powerful and accessible, the barrier to deploying autonomous AI systems is dropping rapidly. The governance conversation needs to catch up with where the technology actually is, not where it was two years ago.
Not a grand institution, but not nothing
The most realistic trajectory is neither pure institutionalization nor pure fragmentation. What is emerging is something more layered and pragmatic: official summit processes setting high-level political direction, complemented by a growing ecosystem of Track 1.5 platforms, expert networks, and technical coordination bodies that build practical pathways between summits.
The progression from Bletchley to Seoul to Paris to India shows increasing sophistication, not stagnation. Each summit added something. Bletchley established the political salience of frontier AI risk. Seoul expanded the stakeholder base and deepened the technical agenda. Paris marked a shift toward action commitments, while still being part of the International AI Safety Report process. India brought Global South perspectives and development-oriented governance frameworks into the core conversation, even if the focus shifted from safety to responsible AI more broadly. This is iterative institution-building, even if it does not yet resemble a single formal body.
The likelier path is convergence through practice rather than through grand institutional design. AI Safety Institutes are already aligning on evaluation methodologies across jurisdictions. The OECD Expert Groups on AI Futures, Compute, and Trustworthy Investments are building shared technical vocabulary. The Global Partnership on AI is developing interoperable safety frameworks. These practical coordination mechanisms accumulate over time into something that functions like institutional infrastructure, even without a formal charter.
The critical variable is governance quality. A “global AI safety commons” that provides shared access to evaluation tools, red-teaming methodologies, and monitoring systems could accelerate convergence. But if it is poorly designed or weakly governed, it becomes symbolic rather than functional. The quality of implementation determines whether emerging coordination mechanisms mature into durable institutions or remain aspirational declarations.
How soft norms become hard rules
Soft norms are hardening faster than is widely appreciated. The International AI Safety Report, led by Yoshua Bengio with input from 72 international experts and backed by more than 30 countries, established something new in AI governance: a commitment to ongoing, evidence-based assessment that keeps pace with technological change. The report’s Key Update, published in October 2025, documented significant capability advances driven by new training techniques rather than simply scaling model size, and identified new challenges for monitoring and controllability. This continuous assessment model is itself a norm, one that creates expectations of transparency and shared evidence that did not exist before.
India’s AI Governance Guidelines represent another form of norm crystallization. By articulating a governance framework that uses existing legal instruments rather than standalone AI legislation, India created a replicable model for other countries, particularly in the Global South, that need governance pathways compatible with their institutional capacity. The guidelines’ emphasis on balancing innovation with sovereignty concerns offers a template that is already informing discussions in other jurisdictions.
The pattern is becoming visible: voluntary commitment leads to shared assessment methodology, which leads to convergent standards, which creates market expectations that function like regulation even without formal legal force. When enough jurisdictions align on evaluation requirements, safety benchmarks, or transparency expectations, frontier developers face compliance obligations regardless of where binding legislation stands. This is how soft governance becomes hard governance in practice: through alignment and mutual expectation rather than through a single treaty.
Safety is not the obstacle
The assumption that safety, innovation, and ethics are competing priorities requiring balance is itself part of the problem. Safety enables trust, and trust enables adoption at scale. Without credible safety infrastructure, AI deployment stalls or, worse, proceeds without accountability until something goes catastrophically wrong.
The practical challenge is not philosophical but infrastructural. High-level principles on responsible AI exist in abundance. The gap is in operationalization: how do you translate principles into evaluation tools, verification mechanisms, and monitoring systems that work across diverse regulatory environments, institutional capacities, and development contexts?
This is where the concept of an AI Safety Commons becomes practically important. If safety tools, including evaluation methodologies, red-teaming frameworks, and risk assessment protocols, are treated as shared infrastructure rather than proprietary advantages, they can travel across borders and jurisdictions faster. Countries that lack the resources to build independent safety testing from scratch can access and adapt shared tools. The analogy is public health infrastructure: common standards for testing and certification, accessible to all, enabling local implementation that reflects local conditions.
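Reduced to engineering terms, the commons idea is a small shared contract: the same evaluation, unchanged, run by any jurisdiction against any model endpoint, with comparable results. The sketch below is a hypothetical design with invented names and no real model calls, offered only to show the shape of the idea.

```python
from abc import ABC, abstractmethod


class SafetyEvaluation(ABC):
    """Hypothetical common contract for shared evaluation tools.

    The point of a commons is that the same evaluation, unchanged,
    can be run by any regulator against any model endpoint, making
    results comparable across jurisdictions.
    """

    @abstractmethod
    def run(self, model_endpoint: str) -> dict:
        """Execute the evaluation and return structured findings."""


class RefusalBenchmark(SafetyEvaluation):
    """Illustrative member of a shared suite: checks refusal behavior
    against a fixed, version-controlled set of prohibited prompts."""

    PROMPT_SET_VERSION = "v0-sketch"

    def run(self, model_endpoint: str) -> dict:
        # A real implementation would query the endpoint and score the
        # responses; this sketch only records the shape of the output
        # that a shared registry would collect and compare.
        return {
            "evaluation": "refusal-benchmark",
            "prompt_set": self.PROMPT_SET_VERSION,
            "endpoint": model_endpoint,
            "findings": "not run (illustrative sketch)",
        }


def certify(model_endpoint: str, suite: list[SafetyEvaluation]) -> list[dict]:
    """Run a shared suite against one model, as a national authority
    might before granting market access."""
    return [evaluation.run(model_endpoint) for evaluation in suite]


if __name__ == "__main__":
    print(certify("https://example.org/v1/model", [RefusalBenchmark()]))
```

Under a design like this, a regulator’s leverage comes from the shared, version-controlled suite, while the heavy engineering of building evaluations is amortized across all participants.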
The politically feasible path runs through demonstrating that safety capacity creates competitive advantage rather than constraining it. For countries like India, investing in domestic AI safety research and shared testbeds is both a sovereignty measure and an economic strategy. A nation that can credibly certify AI systems as safe for its population has leverage over foreign developers seeking market access.
The cost of waiting
Without coordination, governance becomes reactive rather than preventive: we wait for crises like the Grok incident, scramble to respond, and then move on until the next scandal. The same failure modes recur because there is no mechanism for sharing information across jurisdictions or sectors.
Fragmented governance also creates compliance burdens that fall on responsible actors without improving safety. When standards differ across jurisdictions, companies face duplicative regulatory requirements that consume resources without producing commensurate protection. Smaller companies and developing nations are disproportionately affected because they lack the capacity to navigate multiple overlapping frameworks. The irony is that fragmentation imposes costs without delivering the coordination benefits that governance is supposed to provide.
The deeper risk is temporal. AI capabilities are advancing faster than governance frameworks can adapt when those frameworks are developed independently and without coordination. The gap between what AI systems can do and what governance structures can oversee widens with each generation of models. In a fragmented landscape, this gap becomes a permanent condition rather than a temporary challenge.
The parallel with climate governance is instructive. Wealthy nations industrialized, emitted, and built their economies while the Global South bore the worst consequences of the resulting damage. AI governance risks repeating this pattern, but faster. The nations building and profiting from frontier AI systems are not the ones most exposed to the harms of ungoverned deployment.
“The window for building coordination infrastructure that includes all affected parties is narrower than most policymakers assume. Coordinated, imperfect governance that improves iteratively remains far better than fragmented responses that leave the most vulnerable communities absorbing harms they had no role in creating.”
This article is part of Techplomacy Magazine’s AI Diplomacy at a Crossroads special series, 2026.
Cyrus Hodes is Founder of AI Safety Connect, a multi-stakeholder convening platform advancing international coordination on AI safety. He is a Venture Partner at Lionheart Ventures and Co-Founder of The Future Society and Stability AI. Cyrus serves on OECD Expert Groups on AI Futures, Compute, and Trustworthy Investments, and co-led the GPAI Safety and Assurance of Generative AI (SAFE) project. He holds a Master in Public Administration from the Harvard Kennedy School and previously advised the UAE Minister of State for AI.
The views expressed in this article are those of the author and do not necessarily reflect the views of Techplomacy Magazine or the Techplomacy Foundation. Articles may be republished in full, without alteration, with credit to Techplomacy Magazine (techplomacyfoundation.org).

