Wafer Overfits to User Routines
Wafer is not trying to build a general assistant that works a little everywhere; it is trying to lock in a few repeated actions until they succeed almost every time. The product watches the exact taps and screens a person uses, then narrows automation to those proven paths, like opening Spotify, typing an artist name, and pressing play. That is a different reliability strategy from assistants that depend on app developers exposing limited hooks, or from broad computer-use models that must improvise each step.
The key tradeoff is breadth versus hit rate. Wafer records actions the user already performs and fine-tunes around those patterns, because multi-step agents fail quickly when each step is only mostly reliable: small per-step error rates compound across a chain. In practice, that means automating the boring repeats first, not every possible task.
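The compounding argument is easy to make concrete. A quick sketch with hypothetical per-step reliability numbers (not Wafer's actual figures) shows why a narrow, well-tuned path beats a "mostly reliable" general agent as chains get longer:

```python
# Why multi-step agents fail fast: per-step reliability compounds.
# The 90% / 99% figures are illustrative assumptions, not measured data.

def end_to_end_success(per_step: float, steps: int) -> float:
    """Probability that every step in an n-step chain succeeds."""
    return per_step ** steps

# A "mostly reliable" general agent: 90% per step.
# A narrow path fine-tuned on a known routine: 99% per step.
for steps in (3, 5, 10):
    general = end_to_end_success(0.90, steps)
    narrow = end_to_end_success(0.99, steps)
    print(f"{steps} steps: general {general:.0%}, narrow {narrow:.0%}")
```

At ten steps the general agent completes the whole chain only about a third of the time, while the 99%-per-step path still lands around 90%, which is the gap Wafer is betting on.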
This is only possible because Wafer sits below the app layer. Standard Android assistant integrations depend on developers declaring capabilities and intents in advance, so the assistant can launch a screen or trigger a supported action, but only for functions the app owner chose to expose.
The closest comparison is computer use. Anthropic describes it as screenshot-based mouse and keyboard control across any interface, but also warns that it is still in beta and can be slow. Wafer is narrower, but that narrowness is the point: a personalized path can outperform a general model on the same repeated workflow.
The next step is a phone workflow that behaves more like autocomplete for actions than a chatbot for commands. As Wafer captures more repeated routines across music, messaging, travel, and scheduling, the winning assistants will be the ones that turn a person’s own habits into reusable automation primitives, then expand outward from those dependable building blocks.
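What "habits as reusable automation primitives" might look like can be sketched as a recorded tap sequence turned into a parameterized template. Everything here is hypothetical; Wafer's internal representation is not public:

```python
# A minimal sketch: a recorded routine becomes a template with slots.
# All class and field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Step:
    screen: str   # screen the step was recorded on
    action: str   # e.g. "tap", "type", "press"
    target: str   # UI element or text; "{artist}" marks a fillable slot

@dataclass(frozen=True)
class Routine:
    name: str
    steps: tuple[Step, ...]

    def instantiate(self, **slots: str) -> list[Step]:
        """Fill the template's slots with concrete values for this run."""
        return [
            Step(s.screen, s.action, s.target.format(**slots))
            for s in self.steps
        ]

# Recorded once from real usage: open Spotify, search an artist, play.
play_artist = Routine(
    name="play_artist_on_spotify",
    steps=(
        Step("home", "tap", "Spotify"),
        Step("spotify_home", "tap", "Search"),
        Step("spotify_search", "type", "{artist}"),
        Step("spotify_results", "press", "Play"),
    ),
)

concrete = play_artist.instantiate(artist="Radiohead")
print([s.target for s in concrete])
# ['Spotify', 'Search', 'Radiohead', 'Play']
```

The design choice the sketch illustrates is autocomplete-like: the template is fixed from observed behavior, and only the slot varies, which is what keeps the hit rate high on the repeated path.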