OS-Level Assistants Enable Proactivity
Wafer
The core advantage is not better chat; it is better position in the stack. An app-based assistant can only react to what a user types or what another app explicitly exposes through an API. An OS-level product, by contrast, can watch patterns across apps, calendars, messages, notifications, and prior actions, then surface the next likely task before the user asks. That is what makes the experience feel proactive rather than turn-by-turn.
-
In practice, this means an integrated assistant can notice a calendar event with an address, compare ride options across Uber and Lyft, and tee up the best choice. An app-level assistant usually cannot see both apps’ live state, and cannot act unless each app has already granted a narrow hook for that action.
-
Perplexity shows the trade-off clearly. It scaled quickly as an app, reaching $63M ARR by the end of 2024 and $148M annualized revenue by June 2025, but its assistant layer still depends on developer-provided intents. That creates a limited menu of actions rather than a full view of the phone.
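The constraint of developer-provided intents can be made concrete with a small sketch. This is a simplified model, not Perplexity's or Android's actual intent system: the registry below stands in for whatever actions developers have chosen to expose, and everything outside it is simply invisible to the assistant.

```python
# Hypothetical intent registry: the assistant's entire action surface is
# whatever each app has explicitly registered, nothing more.
INTENT_REGISTRY = {
    ("rides", "book_ride"): lambda dest: f"Booking ride to {dest}",
    ("music", "play"): lambda track: f"Playing {track}",
}

def invoke(app: str, action: str, arg: str) -> str:
    handler = INTENT_REGISTRY.get((app, action))
    if handler is None:
        # No registered intent means no fallback: the assistant has no
        # independent view of the app's state to act on.
        return f"Unsupported: {app}.{action}"
    return handler(arg)

print(invoke("rides", "book_ride", "12 Main St"))       # → Booking ride to 12 Main St
print(invoke("rides", "compare_prices", "12 Main St"))  # → Unsupported: rides.compare_prices
```

Comparing prices across ride apps fails here not because the capability is hard, but because no developer registered it; that is the "limited menu" in miniature.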
-
This is also why Google and Apple face structural constraints. Their app store ecosystems rely on developers keeping users inside apps, so a truly cross-app assistant risks bypassing the very interfaces and economics those platforms were built around. Startups forking Android or replacing the default assistant can push further because they are not protecting that incumbent model.
The next battleground is the move from answer engines to system-level agents. As assistants shift lower in the stack, apps increasingly look like back-end services that provide inventory, identity, and transactions, while the operating system or browser becomes the layer that decides what to show, when to act, and which app actually gets used.