The India AI Governance Guidelines released by the Ministry of Electronics and Information Technology outline how India intends to manage the promise and the peril of Artificial Intelligence. The framework rejects the need for a new AI law for now, arguing that existing legislation such as the Information Technology Act, the Digital Personal Data Protection Act, and consumer protection laws can address AI-related risks. It favours a light-touch approach that encourages voluntary industry commitments and accountability embedded within the system rather than relying on punitive regulation. While this approach is meant to encourage innovation, it is also imperative to recognise AI’s potential to amplify inequities and disrupt democratic and social norms. The central question, thus, is whether a governance model that leans heavily on voluntary compliance and existing laws can safeguard citizens from the excesses of unregulated AI development. This is also the question posed by Stanford University’s seminal AI100 report, which underlines that governance structures often lag behind technological innovation.
The most important aspect of the guidelines is perhaps the insistence on human oversight. This aligns India’s stance with global trends that place ethical limits on automation and decision-making without human accountability. Equally vital is the demand for transparency to address AI’s ‘black box problem’: the lack of clarity on how AI systems make decisions. The guidelines require that regulators be able to see how systems are built, who operates them, and how data and computing resources move through the value chain. Yet compliance with these norms would be voluntary. This model stands in contrast to the European Union’s Artificial Intelligence Act, which enforces stringent, risk-based regulation through binding legal obligations. Over-reliance on voluntary compliance may leave citizens vulnerable, turning the framework into a statement of intent rather than a mechanism of protection. Although the framework’s pro-innovation stance is economically appealing, it does not sufficiently address the socio-political implications of AI misuse, from deepfakes to algorithmic discrimination in digitised welfare distribution systems. This predicament of balancing innovation with safeguards, the AI100 report highlights, is universal, as tangible regulation beyond data use remains rare across the world.
The Indian guidelines also propose an institutional framework comprising an
AI Governance Group (AIGG), a Technology and Policy Expert Committee, and an AI Safety Institute to coordinate research, safety evaluation, and policy oversight. The AIGG will comprise five Central ministries and representatives from the Telecom Regulatory Authority of India, the Competition Commission of India, the Reserve Bank of India and the Securities and Exchange Board of India, among others. Centralising authority within a small inter-ministerial body like the AIGG increases efficiency,
but it also concentrates power, creating the potential for political interference in technical or ethical determinations.
The India AI Governance Guidelines present an optimistic roadmap for the country’s engagement with AI. But if India aspires to lead in global AI governance, its frameworks must evolve beyond voluntary ethics towards enforceable accountability.