India’s Artificial Intelligence (AI) Summit in New Delhi – which kicks off Monday with the motto ‘Welfare for All, Happiness for All’ – is getting global attention. Meanwhile, an AI intensification is underway in Maharashtra, one that blends government ambition, Big Tech muscle and the world of surveillance.
The BJP-led Mahayuti alliance, fresh from its January 2026 civic polls manifesto, has charted a path for technology-driven governance in the state. Among its pledges: AI labs in municipal schools and an AI application designed to “free Mumbai” from the presence of so-called “illegal Bangladeshis” and Rohingya refugees.
The government claims a partnership with IIT Bombay is shaping the initiative to locate individuals it deems to have no valid right to be in the city, but the inner workings of the AI system remain opaque.
When The Telegraph Online approached IIT Bombay, the institute offered no comment. The AI technology’s test drive comes close on the heels of a disturbing rise in harassment of Bengali migrant workers, as the lines between ‘Bengali’ and ‘Bangladeshi’ blur in the central government’s drive to root out illegal migration.
Between May and June 2025, Human Rights Watch reports indicate that 1,500 Muslim men, women and children were unlawfully pushed into Bangladesh. In July, The Telegraph reported that at least six Bengalis, members of the Matua community, were detained and harassed by Maharashtra police on suspicion of being Bangladeshi nationals.
In similar raids across Odisha, Chhattisgarh, Delhi, Gujarat and Madhya Pradesh, Bengali migrant workers have been rounded up, detained, and in some instances, forcibly deported to Bangladesh. Rohingya refugees have also been targeted in the same manner.
According to data released by the West Bengal Migrants Welfare Board, 12 Bengali migrant workers have been murdered in other states in recent times, 10 of them in BJP-ruled states.
The Maharashtra government claims a 60 per cent success rate for the AI app, which means that of every 10 suspects flagged, four are wrongly identified. For Indians whose Bengali accents mirror those of Bangladeshi citizens, this technology is a chilling prospect: telephonic conversations sorted, classified and judged by algorithms.
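As a back-of-the-envelope illustration, the government’s claimed success rate works out as follows. The 60 per cent figure is the only number taken from official claims; the function and the scaled-up example are hypothetical:

```python
# Illustrative arithmetic only. The 60 per cent "success rate" is the
# Maharashtra government's own figure; everything else here is hypothetical.

def flagged_outcomes(flagged: int, success_rate: float) -> tuple[int, int]:
    """Split a pool of flagged suspects into (correctly, wrongly) identified."""
    correct = round(flagged * success_rate)
    return correct, flagged - correct

# Of every 10 people the app flags, 4 are wrongly identified.
correct, wrong = flagged_outcomes(10, 0.60)
print(correct, wrong)  # 6 correct, 4 wrong

# Scaled to a hypothetical 100,000 flagged individuals, the same rate
# would mean tens of thousands of wrongful identifications.
```

The point the arithmetic makes is that a rate which sounds like a passing grade translates, at city scale, into very large absolute numbers of people wrongly labelled.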
Manufacturing evidence
“Doesn’t matter what AI says. It might say a 50 per cent match of accents, yet the police might say you are an illegal Bangladeshi or Rohingya,” notes cybersecurity expert Srinivas Kodali, speaking to The Telegraph Online. “Using an application like this is manufacturing evidence according to a predetermined logic by a police official.”
According to an IIT researcher, who wished to remain anonymous, the app likely listens in on telephone conversations, matching voice samples and accents, and presenting probabilistic guesses as hard facts.
“It is a deeply problematic application because it tries to present probabilistic inferences as a set of deterministic facts,” the researcher says, echoing Kodali.
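The pattern the researcher describes can be sketched in a few lines. Nothing is known about the actual app’s internals; the threshold and scores below are invented purely to show how a probabilistic score gets collapsed into a hard verdict:

```python
# A minimal sketch of the criticised pattern: a model's probabilistic
# accent-match score is flattened into a deterministic yes/no label.
# THRESHOLD and the sample scores are hypothetical illustrations.

THRESHOLD = 0.5  # hypothetical cut-off chosen by the system's operators

def label_speaker(match_score: float) -> str:
    """Turn a probabilistic similarity score into a deterministic verdict."""
    return "flagged" if match_score >= THRESHOLD else "cleared"

# A 50 per cent match (statistically a coin flip) is reported as a
# definite "flagged"; one point lower and the same person is "cleared".
print(label_speaker(0.50))  # flagged
print(label_speaker(0.49))  # cleared
```

The uncertainty in the score, and everything it depends on (audio quality, dialect overlap, training data), disappears the moment the label is produced.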
What emerges, experts say, is less a tool for justice and more a sweeping surveillance system. Who builds the technology, and where the data end up once collected, are questions that follow immediately and beg for answers.
Whether a speech recognition tool, whatever its claimed accuracy, can ever be effective for this purpose remains an open question. Similar AI-driven tools have been piloted in EU countries and have already raised accuracy concerns.
Kodali draws parallels to the US, where the Immigration and Customs Enforcement (ICE) uses AI to monitor and track immigrants. ICE employs voice, video, and text analysis, feeding data into social-media and communications-monitoring platforms. Palantir’s ImmigrationOS is among the technologies used by ICE to track migrants, underscoring a global trend of governments wielding AI in the name of security and border control.
Legal foundations, data-privacy dilemmas
India’s Criminal Procedure (Identification) Act, 2022, grants police the broad authority to collect biometric data, including DNA, iris scans, and voice samples. This legal scaffolding supports the AI-driven identification of migrants.
Kodali believes the initial voice database likely stems from Bangladesh’s cyber infrastructure, itself built with Indian assistance: according to the Tech Global Institute, Bangladesh’s turn towards advanced surveillance was aided by India, among other countries.
While the specifics of the app’s methodology remain uncertain, one thing is clear: the scope for surveillance and data privacy violations is vast.
The Digital Personal Data Protection (DPDP) Act of 2023, fully operationalised in 2025, was meant to safeguard digital personal data. “It’s a landmark law that requires informed consent and there are security safeguards. This Act should ensure that data are not collected without consent, and this AI application seems clearly in violation of the Act,” says another researcher in digital technology and societies, speaking anonymously.
However, the DPDP Act of 2023 has since been amended. The Internet Freedom Foundation points to a provision “that blocks access to any information labelled as ‘personal information’, allowing officials to deny critical information”.
The data safeguards, in effect, stand weakened.
Human rights and transparency advocate Venkatesh Nayak has petitioned the Supreme Court of India challenging what he calls “excessive provisions of the DPDP Act including regressive amendments made to the RTI Act, 2005”.
Big Tech, governments, and the AI race
India’s aspirations to become an AI powerhouse are cemented in the recent Union Budget, which promises a 20-year tax holiday till 2047 to foreign companies establishing data centres in the country. The year 2047 is also anchored to the vision of the Viksit Bharat (Developed India) roadmap. This Budget move is seen as an open invitation to Big Tech.
December 2025 saw Microsoft announce a $17.5-billion investment in India, with plans for its largest hyperscale presence and a new data centre launching by mid-2026. Microsoft has also struck three land deals in Pune, worth at least Rs 1,000 crore, for setting up data centres.
“Microsoft has been a co-pilot to making AI a reality in India from boardrooms to classrooms, commerce to communities, and finance to farmers”, says Puneet Chandok, president of Microsoft India and South Asia.
Since April 2025, Maharashtra police have deployed MahaCrimeOS AI, a crime investigation platform powered by Microsoft Foundry. Microsoft CEO Satya Nadella, in a December statement, emphasised that digital security could not be viewed in isolation from sovereignty, a sentiment echoed in the tightrope India walks between technological progress and civil liberties.
“The state feels the need to intervene and doesn’t know how to go about it,” a digital technologies and society researcher notes. “And there is a collusion between government and Big Tech. We’ve seen this in Europe, how they are manning their borders, and in the US in how ICE uses AI to locate illegal immigrants.”
According to Amnesty International, the biggest technology companies outside China (Meta, Google, Amazon and Apple) shape how people access and use the internet. Day-to-day dependency makes it all but impossible to stop using them, and this inability to opt out gives these companies the power to dictate the rules of digital engagement in ways that can harm human rights.
Watchtower’s warning, Bangalore to Mumbai
In 2019, some apartment complexes in the IT city of Bangalore started quizzing Bengali-speaking domestic workers to spot differences in dialect. Unknown to the workers, residents had decided that those with an East Bengali accent should be marked as illegal migrants and evicted. The same year, Karnataka police began crackdowns on Bengali-speaking migrant workers.
By 2019, Wipro, another technology giant based in Bangalore, had begun sorting and classifying residents of Assam for the digital update of the National Register of Citizens (NRC). At least 1.9 million individuals were excluded from the updated register, leaving them in various states of potential statelessness and precarity.
At least 100 individuals died by suicide out of fear of eviction. In 2016, Wipro’s NRC project had won the Digital Innovation in Citizen Services (e-governance) Award, given by CISCO and CNBC TV-18.
The introduction of the AI app in Maharashtra was preceded by an announcement in March last year of a larger detention centre to hold illegal Bangladeshi migrants. In addition, builders and contractors have been asked to be on the lookout for construction workers who might be Bangladeshis.
As Maharashtra moves forward with its AI-driven migrant detection, the spectre of manufactured evidence, wrongful identification, and mass surveillance looms large. The collaboration between government and Big Tech is reshaping the contours of privacy, citizenship, and governance.
As India gears up for what it projects as the biggest AI summit in the global south, its push to be a leader has to address, in the words of the Summit, both the “promise and the perils” of AI. Wrongful identification of individuals driven by an AI vision is one such peril; and yet the cycle of paranoia and the hunt for outsiders continues.



