The AI revolution isn’t coming. It’s already here.
From drafting emails, summarising research, and generating code to preparing lecture notes, compiling legal cases, and creating images and art, tools like ChatGPT, DeepSeek, Gemini, and Claude are reshaping how we work, learn, and live. Globally, the conversation around AI tends to focus on job loss and automation. But in countries like India, a quieter, more insidious challenge is brewing: one that speaks not to the displacement of labour but to exclusion from opportunity.
This challenge isn’t about machines replacing humans. It’s about machines not understanding humans, specifically those who don’t speak English. India is a multilingual nation: the 2011 Census identified 121 languages spoken by 10,000 or more people, along with 1,369 rationalised mother tongues. Yet many of today’s dominant AI models are built to function primarily, and perform best, in one language alone: English. This is a serious problem in a country like India, where English is neither the mother tongue nor the language in which the majority think and argue. According to the India Human Development Survey, only about 5% of Indians are fluent in English, and just 21% have some limited ability.
Those who can engage with AI tools efficiently, who can prompt well, iterate, and interpret machine responses, will gain a serious productivity edge. That edge increasingly belongs to those proficient in English. This raises a difficult but necessary question: is AI silently widening the gap between the English-speaking elite and India’s vernacular population?
The digital divide we feared two decades ago is morphing into a linguistic divide. Hindi and Bengali are the fifth and sixth most spoken languages in the world. Tamil, Telugu, Marathi, and Urdu each have speaker bases counted in the tens of millions. But when users attempt to interact with AI tools in these languages, the experience is often clunky.
While some AI models now support Indian languages, the interaction is far from seamless. The rhythm, intuition, and contextual fluency that English speakers take for granted are absent. In contrast, when users type in English, AI seems to understand them effortlessly. The result? Millions of Indians who could benefit from AI — students, farmers, entrepreneurs, homemakers — might simply find the interface too foreign, frustrating, or alienating. This is not just a usability problem; it’s an equity crisis in the making.
What makes this more concerning is that it’s entirely preventable. India has the talent, the tech infrastructure, and the data to build AI models that can understand and respond fluently in Indian languages. We have seen promising efforts. The National Language Translation Mission aims to harness AI and Natural Language Processing to make public digital content available in all major Indian languages. Its flagship platform, Bhashini, aspires to bridge the linguistic chasm by enabling real-time translation. But for this to succeed, it needs more than policy ambition: it needs deep research, public-private collaboration, and sustained investment in open datasets and regional language corpora. Equally important is the role of India’s tech ecosystem. Startups are working on vernacular AI, but these efforts remain scattered and lack serious institutional backing.
For real inclusion, we need foundational language models trained on Indian languages. We also need AI literacy delivered in regional languages.
This isn’t just a rural issue. Even in urban areas, many students and young professionals from vernacular-medium schools and colleges find English to be a barrier to AI use, hindering their skills, creativity, and curiosity.
This is not India’s challenge alone; Bangladesh, Nigeria, and Indonesia face similar exclusion.
If you’re wondering whether this piece was written by ChatGPT or some other AI model, think about this: could these tools have generated a similar argument in Hindi or Bengali with the same clarity?
Sourish Mustafi is a PhD student at Shiv Nadar University