Zapier's Risk-Aware AI Automation
Mike Knoop, co-founder of Zapier, on Zapier's LLM-powered future
The key product decision is that AI automation cannot be treated as a single risk level. Zapier is turning language models into a controllable layer on top of workflows: low-stakes tasks can run automatically, while higher-stakes steps get guardrails like fixed fields, previews, and draft mode. That lets natural language expand what users can automate without asking them to trust the model equally for every action.
-
In Natural Language Actions, Zapier lets users lock specific parameters instead of letting the model guess them. In practice, that means an LLM might be allowed to write a Slack message, but not choose the Slack channel, because the cost of a wrong guess is much higher on routing than on wording.
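The locking pattern can be sketched as a simple merge where user-pinned fields always override whatever the model proposes. This is a minimal illustration, not Zapier's actual implementation; the function and field names are hypothetical:

```python
def resolve_action_params(model_guess: dict, locked: dict) -> dict:
    """Merge model-proposed parameters with user-locked ones.

    Locked values always win; the model may only fill the unlocked slots.
    (Hypothetical sketch, not Zapier's API.)
    """
    return {**model_guess, **locked}


# The model may draft the message text, but the channel is pinned by the user.
proposed = {"channel": "#random", "text": "Deploy finished, all checks green."}
locked = {"channel": "#deploys"}
params = resolve_action_params(proposed, locked)
# params["channel"] is "#deploys" regardless of the model's guess,
# while params["text"] keeps the model-written wording.
```

The asymmetry matches the risk argument above: wording errors are cheap and reversible, routing errors are not, so only the cheap-to-fix field is left to the model.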
-
This is also why Zapier emphasizes preview- and draft-style workflows. The same Gmail integration can either send an email or create a draft. That distinction turns one model capability into two products, one for speed and one for review, depending on how expensive a mistake would be in the real workflow.
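The send-versus-draft split amounts to a dispatch decision on top of one underlying capability. A hedged sketch, assuming hypothetical `send` and `draft` callables rather than any real Gmail or Zapier interface:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Email:
    to: str
    subject: str
    body: str


def dispatch_email(email: Email, high_stakes: bool,
                   send: Callable[[Email], None],
                   draft: Callable[[Email], None]) -> str:
    """Route one email capability to two products based on stakes.

    Low-stakes mail goes out immediately; high-stakes mail becomes a
    draft a human reviews first. (Illustrative sketch only.)
    """
    if high_stakes:
        draft(email)   # review path: nothing leaves the outbox yet
        return "drafted"
    send(email)        # speed path: executed automatically
    return "sent"
```

The model capability (composing the email) is identical on both paths; only the execution mode changes with the cost of a mistake.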
-
The broader competitive implication is that reliability becomes an orchestration problem, not just a model-quality problem. Zapier's newer AI strategy keeps deterministic workflow steps around the LLM, adds human approval where needed, and uses access controls and scoped actions as part of the product, which is a different posture from pure chat-based assistants.
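That orchestration posture can be sketched as a deterministic loop in which the model may have produced the step payloads, but execution order and approval gating remain fixed workflow logic. The step schema and `requires_approval` flag here are assumptions for illustration, not Zapier's data model:

```python
from typing import Callable


def run_step(step: dict, approve: Callable[[dict], bool]) -> str:
    """Execute one workflow step, gating high-stakes actions on approval.

    (Hypothetical sketch: `requires_approval` marks a scoped, high-stakes
    action that a human must sign off on before it runs.)
    """
    if step.get("requires_approval") and not approve(step):
        return "held"
    return "executed"


def run_workflow(steps: list[dict], approve: Callable[[dict], bool]) -> list[str]:
    # Deterministic orchestration: the loop, ordering, and gates are
    # fixed code, regardless of what an LLM put inside each step.
    return [run_step(step, approve) for step in steps]


# Low-stakes steps run through; the flagged step waits for a human.
results = run_workflow(
    [{"name": "fetch_report"},
     {"name": "email_customer", "requires_approval": True}],
    approve=lambda step: False,  # stand-in for a pending human decision
)
# results == ["executed", "held"]
```

The point of the sketch is where the flexibility lives: the model fills in step contents, while the surrounding control flow and access scoping stay deterministic.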
This points toward automation systems that look less like one-shot chat commands and more like managed coworkers. The winning products will decide which steps can be flexible, which must be locked down, and where humans should approve. That favors platforms like Zapier that already know how to mix AI judgment with deterministic workflow logic and governance.