Agents Need Universal Action Coverage


Sam Hall, CEO of Wafer, on AI agent form factors

Interview
"That's what happened with Siri: it could only set alarms, so people didn't use it for other tasks."

The real failure mode for consumer assistants is not intelligence but narrow coverage. Once people learn that an assistant works only in a few safe, prewired cases, like alarms, one ride app, or one email provider, they stop testing its boundaries and fall back to doing everything manually. That is why broad action coverage matters as much as raw model quality for any agent trying to become a daily interface layer.

  • Siri and Apple Intelligence depend on App Intents, which means developers must explicitly expose actions before the system can use them. In practice, that creates a patchwork assistant: some apps support a task, many do not, and the user cannot build a stable habit around it.
  • This is the same wedge used by OS-level products like Granola and Wafer. Instead of waiting for every app to publish the right hooks, they sit closer to the microphone, screen, or operating system and learn repeated workflows directly, which can turn a user sequence like "open Spotify, search, play" into a reusable pattern.
  • The broader market is moving toward agents that can act across interfaces, not just answer questions. OpenAI describes Operator and Computer Use as systems for navigating multi-step tasks on a computer, which shows where the category is going, but even these systems still cite reliability and task scope as their core limits.
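The App Intents dynamic in the first bullet can be made concrete with a toy registry. This Python sketch is an illustration, not Apple's actual API: the assistant can only dispatch actions an app has explicitly exposed, and every unregistered task falls back to the user.

```python
# Toy sketch of registry-based action dispatch (not Apple's App Intents API).
# Apps must explicitly register each action; anything unregistered fails,
# which is the "patchwork assistant" failure mode described above.
from typing import Callable

# Hypothetical registry: app name -> {action name -> handler}
REGISTRY: dict[str, dict[str, Callable[[str], str]]] = {
    "Clock": {"set_alarm": lambda arg: f"Alarm set for {arg}"},
    # "Mail" never exposed a send action, so that task cannot be performed.
}

def dispatch(app: str, action: str, arg: str) -> str:
    """Run a registered action, or punt back to the user if none exists."""
    handlers = REGISTRY.get(app, {})
    if action not in handlers:
        # Narrow coverage in practice: the assistant cannot act,
        # so the user does the task manually.
        return f"Sorry, {app} does not support '{action}'."
    return handlers[action](arg)

print(dispatch("Clock", "set_alarm", "7am"))  # pre-wired case: works
print(dispatch("Mail", "send", "hi team"))    # missing hook: falls back
```

The design choice the sketch highlights is that coverage is bounded by what developers opt in to, not by what the model can understand.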
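The learn-from-repetition idea in the second bullet can be sketched as counting action n-grams in an event log. This toy Python example is an illustration, not Wafer's published method: any fixed-length sequence of actions seen at least twice is promoted to a candidate macro.

```python
# Toy sketch of mining repeated workflows from an observed action stream
# (an assumption for illustration, not any product's actual algorithm).
from collections import Counter

def frequent_workflows(events: list[str], n: int = 3,
                       min_count: int = 2) -> list[tuple[str, ...]]:
    """Return every n-gram of actions seen at least min_count times."""
    grams = Counter(tuple(events[i:i + n]) for i in range(len(events) - n + 1))
    return [gram for gram, count in grams.items() if count >= min_count]

# Hypothetical event log: the Spotify sequence occurs twice.
log = [
    "open:Spotify", "search:track", "play",
    "open:Mail",
    "open:Spotify", "search:track", "play",
]
print(frequent_workflows(log))
```

A real system would need to handle variable-length sequences, parameters (which track to play), and noise between repetitions, but the core signal is the same: repetition in observed behavior, not developer-published hooks.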

The winners in AI assistants will be the products that make action-taking feel universal, not occasional. As more agents move below the app layer and learn from actual user behavior, apps start to look less like destinations and more like back-end services that the assistant calls on demand.