Web Bot Auth as Infrastructure
Reed McGinley-Stempel, CEO of Stytch, on authentication for AI agents
The strategic point is that agent identity is more likely to become an infrastructure layer than a single-vendor product. If Web Bot Auth is adopted by multiple edge networks and browsers, the hard part shifts from spotting bots by guesswork to reading a shared cryptographic credential. That makes agent traffic easier to classify across the web, and it lets Stytch plug the same standard into both self-identification for good agents and server-side detection for evasive ones.
Today, most agent detection is still brittle. Stytch describes the first version of its isAgent feature as fingerprinting known agents like Browserbase, OpenAI, and Anthropic, an approach that invites false positives. Web Bot Auth replaces that guesswork with a signed proof that the agent controls a key tied to its identity.
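The proof-of-key-control idea can be sketched in a few lines. The header names below (Signature-Agent, Signature-Input, Signature) follow the Web Bot Auth draft, which builds on RFC 9421 HTTP Message Signatures, but the signature base here is a simplified stand-in rather than a spec-conformant construction; the key and directory URL are hypothetical.

```python
import base64
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical agent key; in practice the public half would be published
# at a key directory the site can fetch (the URL in Signature-Agent).
private_key = Ed25519PrivateKey.generate()

def signed_headers(authority: str, agent_directory: str) -> dict:
    """Attach a signed proof of key control to an outbound request (sketch)."""
    # Signature base: the request components the signature covers. The real
    # draft derives this via RFC 9421; this string is a simplified stand-in.
    base = f'"@authority": {authority}\n"signature-agent": {agent_directory}'
    signature = private_key.sign(base.encode())
    return {
        "Signature-Agent": agent_directory,
        "Signature-Input": 'sig1=("@authority" "signature-agent")',
        "Signature": "sig1=:" + base64.b64encode(signature).decode() + ":",
    }

headers = signed_headers("example.com", "https://agent.example/.well-known/keys")
```

The point of the pattern is that the receiving site no longer needs heuristics: it fetches the public key named by Signature-Agent and verifies the signature, which either checks out or it does not.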
The reason openness matters is distribution. Cloudflare documents signed agents as requiring Web Bot Auth, but the underlying idea is not tied to Cloudflare's IP lists or a proprietary browser. Cloudflare has also framed the protocol as a step toward broader registries and third-party adoption, which is what makes Akamai or Google plausible adopters.
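The registry-driven classification that multiple providers could share can be sketched as follows. The in-memory REGISTRY dict, the three-way labels, and the simplified signature base are all illustrative assumptions; a real verifier would fetch and cache keys from the URL in the Signature-Agent header and follow the draft's canonical signature construction.

```python
import base64
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-in for a shared key directory: maps an agent's directory URL to its
# registered public key. Any provider with the same registry can classify
# the same traffic the same way.
_agent_key = Ed25519PrivateKey.generate()
REGISTRY = {"https://agent.example/.well-known/keys": _agent_key.public_key()}

def classify_request(headers: dict, authority: str) -> str:
    """Return 'verified-agent', 'unverified-agent', or 'anonymous' (sketch)."""
    directory = headers.get("Signature-Agent")
    if directory is None:
        return "anonymous"          # no self-identification at all
    key = REGISTRY.get(directory)
    if key is None:
        return "unverified-agent"   # claims an identity we don't recognize
    try:
        sig = base64.b64decode(headers["Signature"].split(":")[1])
        base = f'"@authority": {authority}\n"signature-agent": {directory}'
        key.verify(sig, base.encode())
        return "verified-agent"
    except (KeyError, InvalidSignature):
        return "unverified-agent"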
This fits Stytch's broader product strategy. Stytch is building both delegated login infrastructure for apps that need agents to act on behalf of users, and fraud controls for sites that need to separate good automation from hostile scraping. An open agent identity standard strengthens both products at once.
The next step is a web where agents identify themselves the way apps already use OAuth to signal delegated access. As more infrastructure providers recognize the same signatures, websites can stop treating all automation as suspicious by default. That should expand the market for agent-aware identity, consent, and authorization tools, and give platforms like Stytch more leverage as the control plane behind those flows.