
AI and multilateralism

If left ungoverned, AI can deepen inequalities and destabilise economies. But if governed wisely, it could become a shared instrument of progress and a new chapter in global governance

Aparaajita Pandey, Shashi Bakshi, Anshumaan Mishra Published 10.01.26, 08:13 AM
Representational image sourced by The Telegraph

As India prepares to host the India AI Impact Summit next month, the technology's rising importance on the geopolitical horizon has come into even sharper focus. As the Global South tries to find its position in the race to build the most capable AI systems, it is imperative to examine what the future of global decision-making will look like in the age of AI.

AI is no longer a distant promise of the future; it has already become the defining force of our times and is now shaping geopolitics, economics, security, and even the moral boundaries of human decision-making in an unprecedented manner. However, as AI systems race ahead, the frameworks meant to govern them lag dangerously behind. The problem is not merely technological; it is also political. A transformative technology is guiding our systems without a referee, without borders, and without a shared playbook.


One of the consequences is the widening of the digital divide. A handful of countries and corporations control the leading edge, while much of the world risks being reduced to an audience for the very revolution that will shape its future. In that sense, AI is not just a question of invention; it is a question of power. Humanity stands at a precipice: AI could become our greatest collective tool, or it could deepen divisions and disrupt the global order. Which it becomes will depend on the kind of multilateralism we can build before the technology outpaces our ability to govern it.

Amid this commotion, the United Nations is discreetly attempting to put together a global framework for AI governance, one that seeks to combine scientific grounding with political inclusivity. This emerging architecture represents a rare effort to rebuild multilateral trust in an era of fragmentation. At its core are three interlinked ambitions: grounding AI policy in science rather than fear or hyperbole; bringing states and innovators into a shared dialogue; and safeguarding the inclusion of the Global South.

The first of these goals is pursued through the newly established Independent International Scientific Panel on Artificial Intelligence, a 40-member body tasked with providing evidence-based analysis of the challenges and prospects that AI offers. In a policy landscape dominated by extremes — some predicting utopia, others prophesying apocalypse — the UN's goal is to create a space where science, not sensationalism, guides regulation. The panel's first report, expected this year, will likely serve as the most reliable global benchmark yet for AI safety and governance.

The second pillar is the Global Dialogue on AI Governance, an annual forum set to debut in Geneva in 2026. It aims to build trust by bringing governments, corporations, civil society, and academics to the same table to negotiate norms of transparency, ethics, and safety. The rationale is that before states can legislate collectively, they must first learn to listen collectively.

The third and most transformative pillar is the UN's proposal to build AI capacity in the Global South through a $3 billion Global AI Fund aimed at parity. Without access to computing infrastructure, data ecosystems, and skilled human capital, developing countries cannot meaningfully contribute to shaping global AI systems. The scale of inequality is staggering: Africa collectively possesses fewer GPUs than any other region despite its size and demographic dividend, while a single American company, Meta, reportedly uses over 250,000.

Taken together, these three pillars could form the world's first coherent global layer of AI governance, a system designed not to replace national laws or private-sector codes but to link them. The hope is that such a framework could establish common principles and shared responsibilities, and recognise the collective nature of the challenge.

But progress remains uneven. At the moment, AI regulation resembles a crowded marketplace of competing philosophies. The European Union’s AI Act is the most ambitious legal framework to date, while the United States of America relies on executive orders and voluntary industry commitments. The OECD, ISO, and ITU are drafting their respective technical standards, even as China advances its own model of ‘algorithmic governance’. The result is a spaghetti bowl of overlapping rules and rival philosophies that threatens to further deepen global divisions.

This fragmentation poses the real risk of a race to the bottom, in which countries lower safety standards to remain competitive. The UN is attempting to address this through an International Standards Exchange, a mechanism to link diverse standard-setting bodies and promote interoperability. The problem is that AI does not exist in a political vacuum. Every model, dataset, and algorithm is intertwined with geopolitical rivalry. This creates a contradiction: the technology needs cooperation to ensure safety and stability, yet cooperation is antithetical to the current nature of global politics.

Despite this, there are flickers of optimism. The UN's AI initiatives are being shaped not by superpowers but by middle powers, such as Sweden, Spain, Zambia, and Costa Rica, that are championing dialogue over division. AI requires a framework of preventive diplomacy that encourages collaboration before accidents, misuse, or weaponisation occur.

The rise of open-source AI reveals both the promise and the peril of democratisation. Open-source models can empower small states, researchers, and innovators. At the same time, they can make it easier for malicious actors to misuse the technology. Rather than blanket bans, the world needs a more nuanced understanding of openness, one that differentiates among model weights, datasets, and APIs, and develops norms for safe sharing.

The ability to access and process gigantic amounts of data has become the new currency of power. The UN's projected Global AI Capacity Development Network envisions a 'compute commons' that could link idle resources across borders, allowing smaller states to access shared computing power. This could help establish a minimum, irreducible national computing capacity that every country needs in order to participate in the AI economy.

To organise this growing web of initiatives, the UN has founded the Office for Digital and Emerging Technologies. The ODET functions as both a policy lab and a coordination hub, connecting organisations like UNESCO, ITU, OHCHR, and WMO under one digital umbrella. It also represents a philosophical shift: the UN is positioning itself not as a bystander but as an active steward of technology.

Nevertheless, barriers remain. Geopolitical mistrust between the US and China continues to limit cooperation; bureaucratic inertia and private-sector dominance skew incentives away from public accountability; and capacity gaps in developing regions continue to fester.

Despite these challenges, the case for multilateralism has never been stronger. AI's impact is exponential. If left ungoverned, it could deepen inequalities, destabilise economies, and corrode democratic institutions. But if governed wisely, it could become a shared instrument of progress and a new chapter in cooperative global governance.

Aparaajita Pandey is a global energy strategist and geopolitical analyst. Shashi Bakshi is an author and managing director at a global strategy consulting firm. Anshumaan Mishra is a lawyer and strategy consulting professional. Views are personal
