
IIT-Guwahati develops sensor to turn air exhaled from mouth into voice commands

In cases where individuals cannot produce sound, attempting to speak generates airflow from their lungs

PTI Published 04.08.25, 06:02 PM
Indian Institute of Technology Guwahati. Shutterstock picture.

New Delhi, Aug 4 (PTI) Researchers at the Indian Institute of Technology (IIT) Guwahati and Ohio State University, USA, have developed an underwater vibration sensor that enables automated and contactless voice recognition.

The sensor, they said, offers a promising alternative communication method for individuals with voice disabilities who are unable to use conventional voice-based systems.


The findings of the research have been published in the journal Advanced Functional Materials.

According to Uttam Manna, Professor in the Department of Chemistry at IIT Guwahati, voice recognition has of late become an integral part of modern life.

"It helps users in operating smart devices including mobile phones, home appliances and other devices through voice commands. However, for the people with voice disorders, this technological development remains inaccessible.

"Recent studies show that a noticeable percentage of children and young adults aged between 3 and 21 experience some form of voice disability, underscoring the significant need for more inclusive communication technologies," Manna said.

To address this limitation, the research team focused on the air exhaled through the mouth while speaking, a basic physiological function.

In cases where individuals cannot produce sound, attempting to speak generates airflow from their lungs. When this air flows over a water surface, it produces subtle waves.

The research team developed an underwater vibration sensor that can detect these water waves and interpret them as speech signals without relying on audible voice, thus creating a new pathway for voice recognition.

Manna explained that the developed sensor is made from a conductive, chemically reactive porous sponge.

When placed just below the air-water interface, it captures the tiny disturbances created by exhaled air and converts them into measurable electrical signals.

The team used a Convolutional Neural Network (CNN), a type of deep learning model, to accurately recognise these subtle signal patterns. This setup allows users to communicate with devices from a distance without needing to produce sound.
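The published report does not describe the network's architecture. Purely as an illustration, the sketch below shows how a small one-dimensional CNN (written here in PyTorch) could map a fixed-length electrical signal from such a sensor to one of a few command classes; the number of classes, the signal length, the layer sizes and the example commands are assumptions, not the authors' published model.

import torch
import torch.nn as nn

NUM_CLASSES = 4        # hypothetical command vocabulary, e.g. "on", "off", "up", "down"
SIGNAL_LENGTH = 2048   # assumed number of samples per exhalation window

class WaveCommandCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4),   # raw voltage trace -> 16 feature maps
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.AdaptiveAvgPool1d(1),                      # collapse the time axis to one value per map
        )
        self.classifier = nn.Linear(32, NUM_CLASSES)

    def forward(self, x):
        # x: (batch, 1, SIGNAL_LENGTH) electrical signal recorded from the sensor
        return self.classifier(self.features(x).squeeze(-1))

model = WaveCommandCNN()
dummy_batch = torch.randn(8, 1, SIGNAL_LENGTH)          # stand-in for recorded waveforms
predicted_commands = model(dummy_batch).argmax(dim=1)   # one command index per waveform

In a setup of this kind, each predicted class index would be mapped to a device command, which is how a classifier over sensor waveforms can stand in for spoken voice commands.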

"It is one of the rare designs of material allowing to recognise voice based on monitoring the water wave formed at air and water interface because of exhaling air from mouth. This approach is likely to provide a viable solution for communication with those individuals with partially or entirely damaged vocal cords," Manna said.

At lab scale, the working prototype costs Rs 3,000, he said.

"With research exploring potential industry collaboration for bringing the technology from lab to real world use, the cost of the final product is expected to reduce," he said.

The research team now plans to seek clinical validation for the device. It also plans to collect more datasets from individuals with voice disabilities who can articulate the different words needed to operate home appliances and other voice-commanded smart devices.

"Using these datasets, the research team will be able to refine the developed model for recognising specific words or phrases when exhaled air is directed over a water surface.

"This development holds potential beyond voice recognition. Other than hands-free operation of various devices, the developed sensor can also be used in exercise tracking and movement detection.

"Additionally, its proven durability, remaining stable after extended underwater use, suggests potential applications in underwater sensing and communication," Manna said.

Except for the headline, this story has not been edited by The Telegraph Online staff and has been published from a syndicated feed.
