The puja stage was not reserved for agomoni songs and plays alone; it was also used for serious discussion, on artificial intelligence (AI). On Ashtami evening, New Town CD Block hosted a symposium on the impact of AI on the job market.
The event was moderated by IT consultant Mayukh Mitra, who opened with a question for the audience: “Isn’t the term Artificial Intelligence itself an oxymoron? Intelligence is not the manipulation of data or the mimicry of syntax per se but an inherently human synthesis of perception, intention, emotion, and judgment.”
The first speaker, Santanu Ray, a former professor in the department of metallurgy at IIT Madras, contended that the fascination with AI often stems from a failure to distinguish between reasoning and calculation, between semantic understanding and statistical correlation. “What we are witnessing is not cognition but simulation, an imitation of thought devoid of intentionality or ethical instinct,” Ray said. “Machines can learn patterns, but they do not know. They cannot weigh competing values or grasp the moral stakes of a decision. What AI enables is not autonomous intelligence but amplified predictability, a useful but fundamentally narrow augmentation of human capacity.”
Every move you make
Also speaking was Pratyay Mukherjee, a director of cryptography research and an expert on the interface between AI, data integrity, and state power. He warned that algorithmic inference is increasingly being used to surveil citizens, often without informed consent. “We already have systems that can probabilistically determine political leanings from one’s social media behaviour. But what follows is not abstract: it’s denied visas, blocked scholarships, stalled careers,” he said.
In his view, the real danger is not just that these systems can be wrong but that their decisions are treated as incontestable. Unlike human discretion, algorithmic judgments offer no avenue for appeal. Noting that India still lacks a comprehensive AI governance framework, he pointed to the European Union’s Digital Services Act and AI Act as models the country must study with urgency.
Kaushik Biswas, director of IT delivery at a technology consultancy firm, dismantled the myth of AI as an omniscient, disembodied intelligence. “We must remember that the human brain runs on about 15 to 20 watts of power. To simulate a fraction of that in an AI system, we require data centres consuming megawatts,” he said. “While undeniably powerful in repetitive, high-volume decision-making tasks, AI remains deeply inefficient, non-generalisable, and entirely dependent on the interpretive labour of humans. The challenge is not to resist automation but to ensure that we remain ethically embedded in the systems we design.”
From the audience, Kallol Bhowmick, an IT professional from CD Block, posed a question on the rising vulnerability of large language models to data poisoning, the deliberate introduction of skewed or malicious data into AI training sets. Mukherjee acknowledged the severity of the threat, explaining that researchers are developing countermeasures such as digital watermarking, model provenance tracing, and cryptographic back-checking, though these are still in their infancy. “We must build AI systems with the presumption that they will be attacked,” he said.
Asim Ray, an educationist from the block, asked if academic institutions were prepared to equip students for a labour market where skills become obsolete every five years. “No,” Biswas answered candidly. “We are still teaching children to recall information that AI can retrieve in milliseconds. We need curricula that cultivate curiosity, adaptive reasoning, and interdisciplinary fluency.” Santanu Ray added that educational reform must focus not just on STEM proficiency but on nurturing precisely those human faculties that AI cannot replicate: moral imagination, empathic communication, critical discernment…
A consensus gradually emerged that the future of work will be defined not by a battle between humans and machines but by a complex partnership in which each compensates for the other’s blind spots. AI will undoubtedly displace a significant volume of routine tasks, but in doing so it will generate new categories of labour: roles such as prompt engineers, AI ethicists, policy analysts, and systems integrators, all requiring a hybrid literacy that combines technical skill with human insight. As Mitra, the moderator, summarised: “The jobs we lose will be the jobs whose value lies solely in repetition. The jobs that emerge will demand that we become more human, not less.”