
World Unveils System to Confirm Human Identity Behind AI Shoppers

by Greg Rubino | 2 weeks ago | 9 min read

In a move that could reshape how automated shopping works on the internet, identity startup World has launched a new verification tool, AgentKit, to prove there is a real human behind AI shopping agents making purchases online. The beta rollout comes as "agentic commerce," in which AI bots browse, compare and buy on a user's behalf, shifts from experimental demos to mainstream e‑commerce and payments platforms.

What World has launched

World, backed by Tools for Humanity and co-founded by OpenAI chief Sam Altman, has unveiled AgentKit as a software development kit (SDK) that e‑commerce sites can plug into their existing systems. The core promise: give websites a way to verify that a unique, real person authorized an AI agent’s actions, without breaking the speed and convenience that make automation attractive in the first place.

According to the company, AgentKit is built on top of World ID, the project’s personhood credential that aims to guarantee “one human, one ID” on the internet. The highest-assurance version of this ID is generated using World’s Orb, a spherical biometric device that scans a user’s iris and converts it into a unique, encrypted code that World says cannot be reverse-engineered back into the original biometric data. That code becomes the verified World ID, which users can then connect to their AI agents via the World app and associated services.

How AgentKit works for online merchants

AgentKit is designed to fit into the growing x402 protocol ecosystem, an open standard co-developed by Coinbase and Cloudflare to let software agents transact online. Merchants that already support x402 can add a "proof-of-humanity" check alongside or instead of existing mechanisms such as micropayments or traditional fraud filters.
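The overall shape of that handshake can be pictured as a challenge-and-retry over HTTP, in the spirit of x402's use of the 402 status code. The sketch below is illustrative only: the header name `X-Human-Proof`, the response fields, and `verify_proof` are hypothetical stand-ins, not AgentKit's or x402's actual API.

```python
# Hedged sketch of an x402-style flow with a proof-of-humanity requirement.
# All names (X-Human-Proof, "accepts", verify_proof) are illustrative.

def verify_proof(proof: str) -> bool:
    # Placeholder: in a real deployment this would call out to the
    # identity provider's verifier rather than compare a demo string.
    return proof == "valid-demo-proof"

def handle_checkout(headers: dict) -> tuple[int, dict]:
    """Merchant endpoint: demand a human-approval proof before accepting."""
    if "X-Human-Proof" not in headers:
        # 402 tells the agent what it must supply to proceed.
        return 402, {"accepts": {"proof_of_human": True}}
    if not verify_proof(headers["X-Human-Proof"]):
        return 403, {"error": "invalid proof"}
    return 200, {"status": "order accepted"}
```

An agent seeing the 402 response would obtain the required approval from its user and retry the request with the proof attached.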

In practice, when an AI agent tries to add an item to a cart, apply a coupon code or complete a checkout, the site can request a one-time approval signal linked to a specific, verified human rather than just a device or account. Retailers can define their own policies: for example, they may require human verification for high-value orders, limited-edition drops, bulk purchases or suspiciously fast repeat transactions, while allowing low-risk, everyday orders to go through automatically.
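A merchant-side policy gate of that kind might look like the following. This is a minimal sketch of the risk tiers the article describes, not AgentKit's real API; the `Order` fields, thresholds, and function name are all hypothetical.

```python
# Hypothetical merchant policy: when to demand a proof-of-human signal.
from dataclasses import dataclass

@dataclass
class Order:
    total_usd: float
    is_limited_edition: bool
    quantity: int
    seconds_since_last_order: float

def requires_human_approval(order: Order) -> bool:
    """Return True if this order should trigger a one-time human check."""
    if order.total_usd >= 500:              # high-value orders
        return True
    if order.is_limited_edition:            # limited-edition drops
        return True
    if order.quantity >= 10:                # bulk purchases
        return True
    if order.seconds_since_last_order < 5:  # suspiciously fast repeats
        return True
    return False  # low-risk everyday orders proceed automatically
```

Keeping the policy in one predicate like this lets fraud teams tune thresholds per risk tier without touching the checkout flow itself.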

“AgentKit gives merchants an on-ramp to ‘proof of human’ checks without breaking automated flows,” one technical explainer on the launch notes, highlighting that the goal is to reduce bot-driven purchases, coupon abuse and synthetic-identity fraud while preserving conversion rates. World says its approach is privacy-preserving, claiming that raw biometric data from the Orb is not stored and that merchants only see a cryptographic confirmation that a unique human approved the transaction, not the underlying personal information.
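One way to picture what "only a cryptographic confirmation" buys a merchant is a deduplication check on an opaque per-action identifier. World's public materials describe per-action nullifier hashes; the flow below is a heavily simplified stand-in, and the `DropGate` class is a hypothetical illustration, not part of any real SDK.

```python
# Illustrative sketch: enforce one claim per unique verified human
# using only an opaque identifier, never the person's identity data.

class DropGate:
    """Accept at most one claim per unique human for a limited drop."""

    def __init__(self) -> None:
        self._seen: set[str] = set()

    def accept(self, nullifier_hash: str) -> bool:
        if nullifier_hash in self._seen:
            return False  # this human already claimed the drop
        self._seen.add(nullifier_hash)
        return True  # first claim from this human: allow it
```

The merchant learns that two requests came from the same verified person, and nothing more: no name, no biometric, no account linkage.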

Why AI shopping agents created a new risk

The launch lands at a time when AI shopping agents are rapidly moving into the mainstream, raising new questions about fraud, liability and consumer protection. Recent industry research cited by risk and supply-chain analysts suggests that around 73% of shoppers now use some form of AI in their purchasing journeys, and about 70% say they are comfortable letting AI agents make purchases on their behalf.

At the same time, these tools blur traditional lines of responsibility. If an AI agent, rather than the human shopper, "visits" a website, fills the cart and clicks "buy," it is less clear who should bear the cost when something goes wrong. "Liability remains unclear when AI agents complete transactions: merchants may bear fraud costs despite shoppers never visiting their websites," one analysis warns, highlighting payment security and privacy as top consumer concerns.

Global organizations, including financial networks and standards bodies, have started sounding the alarm on what they call "agentic commerce": highly automated, goal-driven transactions executed by AI. The World Economic Forum, for instance, has pointed out that different platforms are taking divergent approaches to identity, consent logging and disclosure at checkout, while players like Visa and Google experiment with their own protocols to attest that a purchase really represents a recognized customer.

World’s pitch: “power of attorney” for AI agents

Tools for Humanity chief product officer Tiago Sada frames AgentKit as a way for users to formally authorize their AI agents to act in specific contexts, while giving merchants a clear signal they can evaluate. In an interview about the launch, Sada compared the feature to a legal arrangement many consumers already understand. “It’s sort of like delegating ‘power of attorney’ to an agent,” he said, arguing that the system lets websites see that the agent is genuinely acting on behalf of an identified individual.

“What the World ID badge tells you is that someone is a real and a unique human,” Sada added. Website operators, he stressed, still maintain discretion to block or flag particular users they believe are acting in bad faith, but they no longer have to guess whether an AI agent represents a real person or an army of bots.

World is positioning itself as a foundational identity layer for this emerging ecosystem, betting that as more automated agents start making financial decisions, regulators, merchants and consumers will demand stronger guarantees about who is actually behind them. The company argues that cryptographic proof of personhood, rather than just passwords, device fingerprints or payment details, will become a baseline requirement for trust in agentic commerce.

Industry momentum behind agentic commerce

AgentKit’s debut follows a series of moves by major tech and payments companies to embrace AI-driven shopping and transaction agents. Over the past year, large e‑commerce platforms and financial services providers have rolled out features that let AI-driven assistants place orders, negotiate discounts and manage subscriptions without requiring users to manually visit websites each time.

Reports around the sector note that big players like Amazon and Mastercard have already begun integrating automated purchasing capabilities into parts of their offerings, while Google has been experimenting with protocols to support agent-based transactions as a first-class use case on the web. Open and closed standards are being proposed in parallel, with some schemes focusing on verifying payment credentials and others, like World’s, focusing specifically on proving that a transaction can be traced back to a unique person.

Security experts say that as the proportion of transactions initiated by software agents rises, existing fraud tools built around browser fingerprints, IP addresses and manual reviews will come under strain. “Traditional approaches monitoring individual transactions prove insufficient when AI agents execute purchases without direct merchant interaction,” one fraud intelligence report warns, urging organizations to adopt new authentication standards, monitoring capabilities and liability frameworks tailored to AI-driven commerce.

Privacy and controversy: the Orb question

While World is leaning heavily on privacy assurances, its biometric approach has already sparked intense debate. The Orb hardware, which scans a user’s iris to generate a World ID, has previously drawn criticism from digital-rights advocates and regulators who worry that any large-scale biometric system could become a surveillance tool or an attractive target for attackers.

World has repeatedly insisted that its system is designed to avoid that outcome, emphasizing that it does not store raw iris images and instead keeps only an encrypted mathematical representation used to prove uniqueness. In its technical materials, the company argues that verifications can be done in a way that reveals nothing beyond the fact that “this is a real, unique person who has not been enrolled before.”

Still, critics remain wary of any solution that asks users to link biometric-based credentials to financial transactions, even indirectly. Privacy groups have previously raised questions about how consent is gathered, what happens if credentials are compromised, and how such an ID might be used in jurisdictions with weaker democratic safeguards. The rollout of AgentKit is likely to revive those debates as merchants weigh the benefits of stronger “proof-of-humanity” signals against the reputational risks of relying on a controversial identity provider.

What it means for shoppers and merchants

For everyday shoppers, the rise of tools like AgentKit could remain largely invisible, operating behind the scenes in the interfaces of AI assistants and e‑commerce sites. Many users may simply see occasional prompts on their phones or in their apps asking them to approve a purchase initiated by their AI shopping agent, or to complete a one-time verification linking their World ID to a preferred assistant.

Merchants, by contrast, will have to make active decisions about how aggressively to adopt proof-of-human systems and how to configure them across different risk tiers. Fraud teams are being urged to take a leadership role, developing policies that combine identity verification, behavioral monitoring and clear liability definitions for agent-driven transactions. “Fraud prevention teams represent natural leaders for agentic commerce safety because they're positioned to see both opportunity and risk,” one industry report notes, arguing that the goal should be to “build guardrails enabling confident adoption” rather than to slow innovation.

If AgentKit or similar tools gain traction, the next phase of online shopping could be one where autonomous agents do most of the work, but are always tethered, cryptographically and legally, to a verifiable human identity. Whether consumers and regulators accept World's biometric-heavy model as the right foundation for that future remains one of the biggest open questions in a story that has only just begun.