
When screens fade into the background and AI starts doing the work

The operating systems and apps we have grown accustomed to on smartphones and tablets are expected to fade into the background, as AI assistants begin doing things for users, driven largely by voice commands

Mathures Paul | Published 07.01.26, 11:22 AM
From Apple to OpenAI, all big tech companies are working towards a voice-driven future. Picture: iStock

The operating systems and apps we have grown accustomed to on smartphones and tablets are expected to fade into the background, as AI assistants begin doing things for users, driven largely by voice commands. No, this is not some distant future; it is closer to what 2026 has in store. From Apple and OpenAI to Samsung, almost every major technology company is strengthening its AI assistant ambitions. This technology is set to become the centrepiece of how smart glasses and similar devices function.

Take OpenAI, the company behind ChatGPT. The Information reports that the firm will announce a new audio language model in the first quarter of the year, eventually leading to an audio-focused physical device.


So far, audio has been a weak spot for most tech companies, with voice interfaces lagging far behind written text. The past two years alone have seen some expensive misfires. The makers of the Humane AI Pin, for instance, burned through hundreds of millions of dollars before logging out. Meanwhile, the Friend AI pendant, a necklace designed to record daily life, has only fuelled privacy concerns. That, however, has not stopped others from pursuing the voice-first path. Start-ups such as Sandbar, along with a project led by Pebble founder Eric Migicovsky, are developing AI rings that would quite literally allow users to talk to their hand.

OpenAI’s upcoming audio model is reportedly capable of handling pauses and even speaking while a user is still talking. It is expected to underpin a range of devices, including a screen-less smart speaker designed to act as a companion, all part of a broader effort to push screens into the background.

Former Apple design chief Jony Ive, who is assisting OpenAI’s hardware ambitions following the company’s $6.5 billion acquisition of his firm io in May, has a formidable task ahead.

None of this is far removed from Apple’s broader vision with Apple Intelligence. While there have been a few missteps in the implementation of the AI service, the tech giant is expected to set things right in 2026, chiefly through a revamped Siri.

An Apple Intelligence-driven Siri is expected to handle voice commands with far greater sophistication. Reports suggest Apple has opened Siri up to developers through the App Intents system, making the assistant more context-aware when responding to requests.

At present, Siri can open apps via voice commands that trigger Shortcuts, but App Intents significantly expands that vision. Users could, for example, edit photos, add social media elements or purchase items without touching the iPhone screen. Banking services may remain excluded, as such interactions tend to involve sensitive information.
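
For a sense of what that could look like in code, the sketch below shows a minimal App Intent in Swift, the kind of declaration through which an app can expose one of its actions to Siri. The ApplyFilterIntent name and the PhotoEditor helper are hypothetical stand-ins for an app's own photo-editing code, not part of Apple's framework.

    import AppIntents

    // Hypothetical stand-in for an app's own editing code; a real app
    // would modify the most recent photo in its library here.
    enum PhotoEditor {
        static func applyFilter(named name: String) async throws {}
    }

    // A minimal App Intent that Siri could trigger by voice.
    struct ApplyFilterIntent: AppIntent {
        static var title: LocalizedStringResource = "Apply Filter to Latest Photo"

        // Siri can ask for this value in conversation ("Which filter?").
        @Parameter(title: "Filter Name")
        var filterName: String

        func perform() async throws -> some IntentResult & ProvidesDialog {
            try await PhotoEditor.applyFilter(named: filterName)
            return .result(dialog: "Applied \(filterName) to your latest photo.")
        }
    }

Once an app ships intents like this, Siri can discover and chain them, which is how a spoken request such as "edit my last photo and share it" could be carried out without a single tap.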

For smart glasses, voice-first operation is crucial. Last year, Meta chief Mark Zuckerberg wrote on the company’s website: “Glasses that understand our context because they can see what we see, hear what we hear, and interact with us throughout the day will become our primary computing devices.”

Ray-Ban Meta smart glasses, equipped with a camera, speakers and a microphone, are already helping Meta’s AI assistant stand out. Meta AI allows users to ask questions about what they are looking at, from zoo animals to historical landmarks.

Start-ups such as Limitless AI, which makes an AI pendant that clips to clothing to record conversations and generate automatic transcripts, believe wearable recorders paired with an AI coach can give people extra mental bandwidth to be more effective at work and at home.

Samsung, too, is shaping its voice assistant strategy, and Bixby can already be used to complete everyday tasks by voice. “We have a very open-minded approach. We provide Bixby and we also provide Gemini. We ask the consumer to choose what they want. Voice-first, in a way, is like an AI phone. The S24 was the starting point, which we call an AI first-generation phone. When it becomes completely voice-first depends on research breakthroughs. A lot of research is going on at the moment. We had many tough challenges in making AI models run on the device. AI-native platforms need to emerge. Then we can think about completely voice-first phones. Detailed research is under way,” Mohan Rao Goli, managing director of SRI-Bengaluru (Samsung R&D Institute, Bengaluru), previously told this newspaper.
