Is the United States of America facing yet another Sputnik moment? A Chinese company has built DeepSeek, an Artificial Intelligence ‘large language model’ akin to ChatGPT, at a fraction of the cost, a meagre $5 million, compared to the hundreds of millions pumped in by OpenAI, Meta, and Google. It also used far fewer hardware resources, as the US has banned the export of A100 and H100 chips to China. Beyond these advantages, however, DeepSeek is much like existing LLMs. It is no faster than ChatGPT and is just as prone to ‘hallucinations’, the tendency to make up ‘facts’ to fill gaps in its data. In fact, it has an added disadvantage: it refuses to provide answers on issues that China finds sensitive, such as Tiananmen Square and Taiwan.

But the true potential of DeepSeek is to be judged not on its technological finesse but on its ability to transform the economics of the AI market. The combination of low costs and openness may help democratise AI technology, enabling others, especially firms from outside the US, to enter the market. What is more, DeepSeek is open source, which allows others to learn from it and build on it, unlike Silicon Valley entities that guard AI technology as a precious secret. By circumventing the need for Western hardware and capital, China has shown that sanctions can be taken up as a challenge to push innovation instead of stifling it. India, in particular, can draw important lessons from this. If the next phase of AI innovation is about smart design and efficiency rather than scale, India could become a major player.
This is not to suggest that DeepSeek is without concerns. Data protection watchdogs in Ireland and Italy have raised questions about DeepSeek’s data processing practices. Since DeepSeek operates out of China, where data laws differ significantly from those in Europe and the US, there are serious red flags concerning privacy. For instance, DeepSeek’s app collects a vast amount of personal data and stores it on servers in China, a country with a dubious record of data exploitation. Moreover, DeepSeek’s openness is also laden with risks. Making such a powerful model freely available raises the possibility of misuse, with vested interests, from rogue States to criminal organisations, weaponising this technology. Even as new AI products emerge in the market, governments around the world will need to work together to create frameworks for the responsible use of AI while balancing innovation and security. The Artificial Intelligence Action Summit in Paris next week thus comes at an opportune moment.