They Don’t Work, Here’s What Does

Over the past 20 years, there has been one common way to distinguish humans from robots: the CAPTCHA test. These annoying picture-based tests have us staring at blurry photos of mundane objects, from traffic lights to buses and bikes, trying to work out which boxes contain them. On the surface, solving one carried a simple meaning: you were human and not a bot in disguise, so you deserved to pass through the internet's gate and see the content behind the test. And all was well in the world. Until it wasn't.

Today, things aren't as easy as they used to be. Bots and AI agents get smarter every day, and they have reached a level where solving image-based tests is trivial. For context, researchers at the University of California recently found that artificial intelligence (AI) bots are more proficient at solving CAPTCHAs than humans are.

To curb this, developers have resorted to making CAPTCHA tests harder in order to keep bots out. But it's a losing game: more difficult tests only make the online experience worse for humans, while AI keeps getting better at solving them.

It is becoming increasingly clear that the only way to counter this problem is to replace the current model with something fundamentally better. If a burglar keeps picking your lock to get into your home, you don't keep buying more expensive versions of the same lock; you switch to a different way of securing the door. Likewise, web developers need a new approach to verifying who is on the other end of an internet connection.

AI ate the CAPTCHA

CAPTCHAs rested on a simple premise: machines struggled with pattern-recognition tasks that come naturally to people. That advantage has collapsed.

With advances in computer vision, reinforcement learning, and large language models, modern AI is better at solving CAPTCHAs than most humans. Image-recognition systems routinely spot crosswalks or bikes with near-perfect accuracy. Automation bots can mimic mouse movements and timing patterns to fool behavioral detection systems. Multimodal language models can read the distorted text that once confounded software. In real-world testing, bots register accuracy above 95%, while humans often score far lower and take longer due to fatigue, poor design, or accessibility challenges.

This inversion has created a perverse arms race. Each new CAPTCHA is made more difficult in the hope of tripping up machines, but that only makes it harder for humans too. The result is not security: websites end up defeating their real users while the most sophisticated bots slip through anyway.

Recent events show just how vulnerable the system has become. In mid-2025, OpenAI's new ChatGPT agent bypassed Cloudflare's "I am not a robot" check without being detected. A year earlier, researchers from ETH Zurich demonstrated an AI model that could solve Google's reCAPTCHA v2 image challenges with 100% success. These are not isolated cracks; they are signs that the entire premise of the CAPTCHA has collapsed.

The problem of online identity has outgrown what CAPTCHAs were designed to solve. Stopping bots from signing up for free email accounts was once the central challenge. Today the stakes are far higher: the integrity of the financial system, the credibility of elections, and the distribution of humanitarian aid all depend on knowing who is a real person and who is not.

CAPTCHAs were never built to handle problems at this scale. They can weed out crude spam bots, but they are powerless against fake accounts at scale, automated propaganda networks, or coordinated campaigns of deepfake-driven impersonation. The same generative AI that shreds image puzzles can spin up endless synthetic identities, amplify disinformation, or game online systems at will. In this context, the "prove you are not a robot" checkbox feels like a lock on a screen door.

A fundamental change is needed now: a system that can establish humanity without demanding every other disclosure along with it. That means privacy by design, protection of basic rights, and usability simple enough for anyone to adopt. If we cannot verify personhood in a reliable and humane way, the digital systems we rely on will continue to erode under the weight of synthetic actors.

There's a better path

If the CAPTCHA marks the end of an era, proof of personhood can mark the beginning of something new. The goal is not to reinvent web puzzles, but to establish a higher-order trust layer: a way to confirm that a real human being is present without demanding anything more.

Passports provide a useful analogy. A passport doesn't reveal your entire life story at the border; it simply confirms that you are who you claim to be and that you are recognized as a person within an established system. Digital proof of personhood can play a similar role online. Instead of distorted text or image grids, it rests on the following principles:

  • Human-first and rights-preserving: designed around dignity and accessibility, not friction.
  • Usable across contexts: from financial transactions to humanitarian assistance to democratic governance.
  • Privacy-preserving: proving that "a real person is here" without releasing biometric data, identity documents, or other sensitive details.

Just as a passport unlocks trust across borders, digital proof of personhood can unlock trust across the network. It offers a way out of the arms race between bots and CAPTCHAs, replacing brittle tests with a durable foundation for verifying humanity itself.
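To make the privacy-preserving principle concrete, here is a minimal, hypothetical sketch of what a verifier might do with such a proof. Every name in it (PersonhoodProof, verify_personhood, the nullifier field) is an illustrative assumption rather than a real API, and the HMAC from a trusted issuer stands in only to keep the example short; a production system would rely on zero-knowledge proofs or anonymous credentials rather than a shared secret.

```python
# Hypothetical sketch: a verifier accepts "a real person is here" without
# learning who that person is. Illustrative only; a real deployment would use
# zero-knowledge proofs or anonymous credentials, not an issuer-held secret.

import hmac
import hashlib
from dataclasses import dataclass

ISSUER_KEY = b"secret-held-by-a-trusted-issuer"  # placeholder for the issuer's key


@dataclass
class PersonhoodProof:
    scope: str       # what the proof is for, e.g. "example.org/signup"
    nullifier: str   # opaque per-person, per-scope value; reveals no identity
    tag: str         # issuer's MAC over (scope, nullifier), vouching for a unique human


seen_nullifiers: set[str] = set()  # enforces "one action per person per scope"


def verify_personhood(proof: PersonhoodProof) -> bool:
    """Return True if a trusted issuer vouched for a unique human in this scope.

    Note what is absent: no name, email, ID number, or biometric is ever sent.
    """
    expected = hmac.new(
        ISSUER_KEY,
        f"{proof.scope}|{proof.nullifier}".encode(),
        hashlib.sha256,
    ).hexdigest()
    if not hmac.compare_digest(expected, proof.tag):
        return False  # the issuer never vouched for this proof
    if proof.nullifier in seen_nullifiers:
        return False  # this person has already acted in this scope
    seen_nullifiers.add(proof.nullifier)
    return True
```

The point is what the verifier never sees: no name, email, or biometric crosses the wire, and the per-scope nullifier is all it needs to ensure that one human acts only once.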

Kill the CAPTCHA, build human trust

The collapse of the CAPTCHA is more than a technical inconvenience; it's a signal. For 20 years we trusted these puzzles to keep the internet human, and AI has surpassed them. The challenge ahead is not to build more difficult tests, but to build a better foundation.

Proof of personhood shows the way. By treating humanity as something to verify as a right rather than a hurdle to clear, we can protect the systems that matter most: the everyday digital spaces where finance, governance, aid, and trust are the currency. The lesson of the CAPTCHA era is clear: brittle defenses break under pressure. The lesson of the passport era is just as clear: durable identity systems, built with rights at their heart, can last for generations.

The question is not whether we can keep bots out; AI will keep getting smarter. The question is whether we can design systems that keep people visible, respected, and trusted across the network. That's the real test. And it's one we can't afford to fail.
