You’re trying to book concert tickets before they sell out. You click the link, and before you can make the payment, you’re asked to identify traffic lights, bicycles or blurry crosswalks in a grid of tiny images.
Again.
For many people, this has become a routine part of life. Logging into financial apps, shopping online or creating accounts increasingly involves “proving you are human”.
These systems are known as CAPTCHA. Why are they everywhere? The short answer is that websites are fighting a rapidly escalating war against bots: automated software that imitates human behaviour online. And thanks to advances in artificial intelligence (AI), those bots are becoming smarter, cheaper and harder to detect than ever before.
Why websites need proof you are human
Huge amounts of online traffic now come from automated systems. Some are helpful, such as search engine crawlers indexing pages for Google search.
Others are far less welcome, and may involve phishing, spam, fake accounts, password theft, misinformation, and distributed denial-of-service attacks that overload web servers. In some areas, AI agents now generate more automated online traffic than humans do. Modern AI systems can generate convincing text, imitate browsing patterns and even solve some CAPTCHA puzzles.
At the same time, companies are increasingly worried about bots scraping online content to train AI systems.
As a result, more websites are adding verification systems simply to keep abuse under control.
How CAPTCHA actually works
CAPTCHA stands for “Completely Automated Public Turing test to tell Computers and Humans Apart”. The original idea was simple: give users a task humans find easy, but computers find difficult.
Early CAPTCHA systems often involved distorted text. Later versions switched to image-recognition tasks such as selecting all the squares containing traffic lights or bicycles.
Google’s reCAPTCHA became one of the best-known examples. Earlier versions even helped digitise books and improve street-view image recognition while users solved puzzles.
But computer vision has improved rapidly in recent years. Advances in AI mean bots can now solve many traditional CAPTCHA challenges surprisingly well. Researchers have repeatedly shown that modern AI systems can bypass some CAPTCHA systems with high success rates.
That is why today’s CAPTCHA systems rely less on puzzles and more on behavioural analysis.
When users click the CAPTCHA checkbox, the system analyses many background signals, such as mouse movements, typing speed, IP address, device information and interaction timing. Humans tend to behave in slightly inconsistent ways; bots are usually more predictable.
If the system is sufficiently confident you are human, you may never see an image puzzle at all. But if something appears suspicious, the system may trigger harder tests.
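The logic described above can be sketched in code. This is a simplified, hypothetical illustration, not how any real CAPTCHA provider actually works: the signal names, weights and thresholds are all invented for the example. It shows the general idea of combining background signals into a risk score that decides whether a user passes silently, sees a puzzle, or is blocked.

```python
# Hypothetical sketch of behavioural risk scoring. All signal names,
# weights and thresholds are invented for illustration only.

def risk_score(signals: dict) -> float:
    """Combine weighted behavioural signals into a 0-1 bot-likelihood score."""
    score = 0.0
    # Perfectly straight, low-variance mouse paths suggest a scripted browser.
    if signals.get("mouse_path_variance", 1.0) < 0.05:
        score += 0.4
    # Humans rarely type with machine-like regularity.
    if signals.get("keystroke_interval_stddev_ms", 100) < 5:
        score += 0.3
    # Traffic from known data-centre IP ranges is a common bot signal.
    if signals.get("ip_is_datacenter", False):
        score += 0.3
    return min(score, 1.0)

def verification_step(signals: dict) -> str:
    """Decide: pass silently, show a harder test, or block outright."""
    score = risk_score(signals)
    if score < 0.3:
        return "pass"        # confident the user is human: no puzzle shown
    elif score < 0.7:
        return "challenge"   # uncertain: trigger an image puzzle
    return "block"

# A human-like session passes without ever seeing a puzzle:
human = {"mouse_path_variance": 0.8, "keystroke_interval_stddev_ms": 120}
print(verification_step(human))  # pass

# A scripted session trips several signals at once:
bot = {"mouse_path_variance": 0.0, "keystroke_interval_stddev_ms": 1,
       "ip_is_datacenter": True}
print(verification_step(bot))  # block
```

Real systems weigh far more signals, often with machine-learned models rather than hand-written rules, but the pass/challenge/block decision structure is the same.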
Moving beyond traditional CAPTCHA puzzles
While some bots now use AI capable of solving image-recognition tasks, others simply outsource CAPTCHA solving to cheap human labour services, where real people complete challenges for a small payment.
This has turned CAPTCHA into an ongoing arms race. That may explain why CAPTCHA tests often feel harder and more frustrating than they used to.
As AI continues to improve, websites will likely move beyond traditional CAPTCHA puzzles. Future systems may increasingly rely on behavioural biometrics, such as typing rhythm or scrolling style, device verification systems, invisible background risk scoring, and AI systems designed to detect other AI systems.
In many cases, users may no longer even notice the verification process happening.
CAPTCHA tests may seem like a minor annoyance, but they reflect a much larger paradigm shift online.
For decades, websites largely assumed visitors were human. Increasingly, that assumption no longer holds. As AI-generated traffic continues to grow, proving we are human online may become an even more common part of everyday life.
The Conversation