Workflow Context Is the Moat
Wade Foster, co-founder & CEO of Zapier, on AI agent orchestration
The strategic point is that winning in AI workflows depends less on picking the smartest model and more on controlling everything around it. In practice, the durable value sits in gathering the right data from tools like Gong and Salesforce, shaping it into usable context, calling the model at the right moment, and then routing the output into the next system or approval step. That is why Zapier frames the model as one component inside a larger automation graph, not the product by itself.
-
Zapier describes enterprise agent reliability as a workflow design problem. A team can move data deterministically across apps, then use an LLM only for the fuzzy step, like summarizing a transcript, scoring churn risk, or drafting a case study. That lowers cost and error rates versus letting an agent decide every step.
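That pattern can be sketched in a few lines: deterministic code moves data between systems, and the model is invoked only for the one fuzzy step. Everything below is a hypothetical illustration with stubbed functions (`fetch_transcript`, `summarize`, `write_to_crm`); it does not reflect Zapier's actual API.

```python
def fetch_transcript(call_id: str) -> str:
    """Deterministic: pull a call transcript from a (stubbed) recording tool."""
    return f"Transcript for call {call_id}: customer asked about renewal pricing"

def summarize(transcript: str) -> str:
    """Fuzzy: the only step delegated to an LLM. Stubbed with a placeholder;
    in practice this would be a model API call with a tight prompt."""
    return transcript.split(":", 1)[1].strip()

def write_to_crm(record: dict) -> dict:
    """Deterministic: push the result into a (stubbed) CRM field."""
    return {"status": "ok", **record}

def run_workflow(call_id: str) -> dict:
    transcript = fetch_transcript(call_id)   # deterministic fetch
    summary = summarize(transcript)          # the single LLM step
    return write_to_crm({"call_id": call_id, "summary": summary})  # deterministic write

result = run_workflow("c-123")
print(result["status"])  # -> ok
```

Because the model touches only one step, a bad generation can corrupt at most that field, not the routing logic around it, which is the cost and error-rate argument in practice.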
-
This is also why orchestration is more granular than model routing. The work includes fetching records, enriching leads, pulling prompts, cleaning payloads, trimming outputs, and setting human checkpoints. Zapier had already been building this layer in its Natural Language Actions work, where it translated plain text into API calls and compressed machine output into model-friendly results.
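A few of the granular steps above can be made concrete. The sketch below invents payload shapes and helper names (`clean_payload`, `trim_output`, `human_checkpoint`) purely for illustration; nothing here describes Zapier internals.

```python
def clean_payload(raw: dict) -> dict:
    """Normalize keys and drop empty fields before data reaches the model."""
    return {k.lower(): v for k, v in raw.items() if v not in (None, "", [])}

def trim_output(text: str, limit: int = 200) -> str:
    """Cap model output so downstream systems receive a bounded payload."""
    return text if len(text) <= limit else text[:limit].rstrip() + "..."

def human_checkpoint(draft: str, approved: bool) -> dict:
    """Hold a draft for review; route it onward only if a human approves."""
    if approved:
        return {"routed": True, "final": draft}
    return {"routed": False, "final": None}

record = clean_payload({"Name": "Acme", "Notes": "", "Stage": None, "ARR": 120000})
draft = trim_output("Acme looks like a churn risk based on the last three calls. " * 10)
decision = human_checkpoint(draft, approved=True)
print(sorted(record), decision["routed"])  # -> ['arr', 'name'] True
```

Each helper is deterministic and testable on its own, which is what makes this layer reusable across many workflows regardless of which model sits in the middle.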
-
Competitors split on where they start. Bardeen begins from a text command in the browser and builds automations around what the user is viewing. n8n emphasizes a low-code canvas that can connect any API or database. Zapier’s edge is its long-standing interoperability layer across thousands of apps, plus governance and reusable playbooks for repeat business workflows.
The market is moving toward software where models are interchangeable, but workflow context, permissions, and downstream actions are the moat. That favors platforms that can turn scattered company systems into reliable multi-step automations, with the LLM acting as a reasoning module inside a much bigger operating loop.