Over the last few months, tools like OpenClaw have shown what tech-savvy AI users can do by setting a virtual cadre of automated agents on a task. But that individual convenience can become a DDoS-level pain for online service providers faced with a torrent of Sybil attack-style requests from thousands of such agents at once.
Identity startup World thinks its “proof of human” World ID technology can provide a potential solution to this problem. Today, the company launched a beta of Agent Kit, a new way for humans to prove they are directing their AI agents and for websites to limit access to AI agents working on behalf of an actual human.
If you recognize the name World, it’s probably as the organization behind WorldCoin, the Sam Altman-founded cryptocurrency outfit that launched in 2023 alongside an offer to give free WorldCoin to anyone who scanned their iris in a physical “orb.” While WorldCoin still exists (at a current value well below its early 2024 peaks), World has now pivoted to focus on World ID, which uses the same iris-scanning technology as the basis for a cryptographically secure, unique online identity token stored on your phone.
World now claims nearly 18 million unique humans have verified their identities on one of nearly 1,000 physical orbs around the world. Now, with Agent Kit, World wants to let those users tie their confirmed identity to any AI agent, letting it work on their behalf across the Internet in a way other parties can trust.
Who are you working for?
Rather than blocking automated traffic outright as a safety or data-protection measure, World suggests sites could instead require AI agents to present an associated World ID token to prove they represent an actual human who’s behind any request. In this way, the site could allow agents to access limited resources like restaurant reservations, ticket purchase opportunities, free trials, or even bandwidth without worrying about a single user flooding the process with thousands of anonymous bots. The same idea could apply to sensitive reputational systems like online forums and polls, where it’s important to prevent automated astroturfing or dogpiling.
The Agent Kit system is built atop the x402 protocol, which was developed with support from Cloudflare and Coinbase. In recent months, World says, some sites have been using that protocol to let AI agents “prove” their authenticity by making micropayments, which can act as a “rate limiter” for bad actors. But while a sufficiently motivated attacker could simply pay to help a coordinated agent swarm get around those micropayment limits, that attacker would theoretically be unable to provide each of their agents with a unique World ID to establish its fake humanity.
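The asymmetry in that argument can be stated in a few lines of illustrative code (this doesn’t model x402 itself; the function names are made up for the comparison): a payment gate scales with the attacker’s budget, while an identity gate scales only with the number of distinct verified humans.

```python
# Hedged illustration of the rate-limiting argument above.

def agents_admitted_by_payment(budget_cents: int, fee_cents: int) -> int:
    # A micropayment gate only throttles by budget: a motivated
    # attacker simply scales with money.
    return budget_cents // fee_cents

def agents_admitted_by_identity(unique_human_ids: set[str]) -> int:
    # A uniqueness gate admits one "slot" per verified human, no
    # matter how many agents or dollars stand behind each one.
    return len(unique_human_ids)

# An attacker with $50 and a 1-cent fee fields 5,000 paying agents...
print(agents_admitted_by_payment(5000, 1))          # 5000
# ...but with a single iris scan, gets exactly one identity.
print(agents_admitted_by_identity({"human:attacker"}))  # 1
```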
The trick, of course, is getting a critical mass of people who use AI agents (or who use the Internet more generally) to get their irises scanned for a World ID in the first place. And while World says about 18,000 new users have confirmed their identities this way in the last week, it will be hard to raise that adoption rate without a killer app that justifies such an onerous, if one-time, biometric verification step.
Until then, the chicken-and-egg problem of assigning a unique identity to every online human (and, by extension, to the agents they might deploy) will remain unsolved. Still, at least we now have a framework for AI agent authenticity that’s just a few billion iris scans away from being truly workable.







