Lower-stack Agents Trade Distribution for Signals

Sam Hall, CEO of Wafer, on AI agent form factors

Interview
Companies are weighing the distribution they sacrifice by going lower in the stack against the advantages they gain in data access
The real prize in AI agents is not a prettier app; it is privileged access to the raw signals that reveal intent before a user types a command. Going lower in the stack gives a product direct visibility into microphone activity, screen contents, app launches, calendar context, and repeated workflows, which makes prediction and automation better. The cost is that these products lose the easy installation and built-in traffic that app stores, browsers, and standard SaaS apps provide.

  • Granola is a concrete example of the trade-off. Instead of living as a Zoom bot or browser tab, it runs on the desktop, watches for mic activity and meeting context, and turns any Zoom, Meet, or Teams call into transcripts, summaries, and action items. That deeper placement lets it capture meetings across platforms, but it requires users to install a desktop app first.
  • Rewind, now Limitless, made the same bet from another angle. Its product records screen and conversation data so the system can reconstruct what happened across the workday. That kind of memory product gets stronger with broader device level access, but it is harder to distribute than a normal web app and has pushed the company toward dedicated hardware and software rather than a lightweight browser workflow.
  • Perplexity shows the opposite strategy. It first won distribution with a standard consumer app and search product, then moved toward deeper control with an Android assistant and later an agentic browser. That path keeps user acquisition easier, but the product is still constrained by what browsers, assistants, and app developers expose, which is why full OS control remains strategically attractive.

This is heading toward a split market. High-distribution agents will spread fastest as browsers, assistants, and chat apps, while the products with the best context and action quality will keep moving into the OS, the desktop, and eventually dedicated hardware. Over time, the winning AI interfaces are likely to look less like standalone apps and more like thin surfaces on top of a system that already sees everything important.