Modern artificial intelligence (AI) technologies have called into question the effectiveness of the usual tools for protecting websites against bots. CAPTCHA tests, designed to distinguish humans from machines, no longer cope with this task, The Conversation reports. Today, bots solve these puzzles faster and more accurately than humans do.
CAPTCHA, which appeared in the early 2000s, was invented by researchers at Carnegie Mellon University. It was originally developed to protect websites from automated programs – bots that created fake accounts, bought up tickets or sent spam. The principle was simple: a visitor had to complete a task that was easy for humans but difficult for machines.
The first version of CAPTCHA asked users to type out distorted letters and numbers. In 2007, reCAPTCHA followed, adding words to the tasks. In 2014, Google released reCAPTCHA v2, which remains the most widely used version: it asks users either to tick the box "I'm not a robot" or to select matching images, for example those containing bicycles or traffic lights.
However, AI systems have learned to bypass CAPTCHA. Computer vision and language-processing technologies let machines read distorted text and recognize objects in images with ease. Tools such as Google Vision and OpenAI's CLIP solve these tasks in a fraction of a second, while a human needs far longer. This is already causing problems in real life: bots buy up tickets to sports matches and mass-reserve seats, leaving ordinary users without access. In the UK, for example, automated programs mass-book driving-test slots and then resell them at a steep markup.
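To give a sense of why image CAPTCHAs no longer hold up, here is a minimal sketch of the kind of zero-shot recognition a model like CLIP performs: it scores a single image against a handful of text labels. The use of the Hugging Face transformers library, the "openai/clip-vit-base-patch32" checkpoint and the file name are assumptions made for illustration, not details from the article.

```python
# Sketch: scoring one image tile against candidate labels with CLIP,
# the kind of zero-shot recognition that defeats image-selection CAPTCHAs.
# Assumes the `transformers` and `Pillow` packages and the public
# checkpoint "openai/clip-vit-base-patch32".
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("captcha_tile.png")  # hypothetical CAPTCHA image tile
labels = ["a bicycle", "a traffic light", "a crosswalk", "something else"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)  # similarity scores -> probabilities

for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.2%}")
```

A script like this classifies a tile in well under a second on ordinary hardware, which is the gap the article describes between machines and human solvers.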
Developers, however, are trying to adapt to the new challenge. In 2018, Google introduced reCAPTCHA v3, which no longer asks users to solve puzzles at all. Instead, the system analyzes behavior on the site – cursor movement, typing speed and other details characteristic of humans.
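In practice, reCAPTCHA v3 gives the page a token, and the site verifies that token against Google's siteverify endpoint, which returns a human-likeness score between 0.0 and 1.0. Below is a minimal server-side sketch of that check; the `requests` library, the placeholder secret key and the 0.5 cut-off are illustrative assumptions rather than details from the article.

```python
# Sketch: verifying a reCAPTCHA v3 token on the server.
# Google's siteverify endpoint returns a score between 0.0 (likely a bot)
# and 1.0 (likely a human); the 0.5 threshold below is an illustrative choice.
import requests

RECAPTCHA_SECRET = "your-secret-key"  # placeholder, issued in the reCAPTCHA admin console

def is_probably_human(token: str, min_score: float = 0.5) -> bool:
    resp = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={"secret": RECAPTCHA_SECRET, "response": token},
        timeout=5,
    )
    result = resp.json()
    return result.get("success", False) and result.get("score", 0.0) >= min_score

# Usage: pass in the token that the reCAPTCHA JavaScript attaches to the form
# submission, and reject or add friction to requests that score too low.
```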
It turned out, however, that such methods are not ideal. First, they raise data-privacy concerns, since they require collecting information about users. Some sites have already gone further and verify users with biometric data such as fingerprints, voice commands or facial recognition.
Second, even these systems can already be bypassed by advanced AI, and with the arrival of AI agents – programs that perform tasks on behalf of users – the situation may become even more complicated. In the future, sites will need to distinguish "good" bots acting for the benefit of users from "bad" bots that break the rules. One possible solution is digital certificates for authentication, but these are still under development.
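No standard for such certificates exists yet, but the underlying idea resembles ordinary public-key authentication: a "good" bot would present an identity credential signed by a trusted issuer, and the site would check the signature. The sketch below is purely illustrative; the credential format, the issuer and the Ed25519 choice are invented for the example, using the Python `cryptography` package.

```python
# Purely illustrative sketch: how a site might verify a signed "bot identity"
# credential, assuming some trusted issuer signs the bot's declared identity
# with an Ed25519 key. No such standard exists yet; the names are hypothetical.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def is_trusted_bot(identity: bytes, signature: bytes, issuer_public_key_bytes: bytes) -> bool:
    issuer_key = Ed25519PublicKey.from_public_bytes(issuer_public_key_bytes)
    try:
        issuer_key.verify(signature, identity)  # raises InvalidSignature if forged
        return True
    except InvalidSignature:
        return False

# A request from a declared bot would carry its identity, e.g. b"agent:travel-booker/1.0",
# plus the issuer's signature; the site checks both against the issuer's published key.
```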
For now, the contest between bots and protection systems continues. CAPTCHA, once a reliable tool, is losing its effectiveness, and developers must find new defenses that are convenient for users yet out of reach for attackers.