OS Level AI Agent SDK

Sam Hall, CEO of Wafer, on AI agent form factors

Interview: "the Wafer product - or whoever wins this space - will become an internal SDK that app developers can leverage."

The winning AI phone layer would sit between apps and users, turning apps into service endpoints rather than destinations. In practice, developers would stop designing every task around opening a full-screen app and instead expose data and actions to an OS-level agent that decides when to answer automatically, when to ask for confirmation, and which app to call in the background.
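As a concrete sketch of what "exposing data and actions" to an agent layer could look like, an app might register each action with a policy that tells the agent when it may run it without asking. This is entirely hypothetical: none of these interfaces or names come from Wafer, Apple, or Google; it only illustrates the apps-as-service-endpoints idea.

```typescript
// Hypothetical action surface an OS-level agent might consult instead of
// launching an app's UI. All names here are invented for illustration.

type AgentPolicy = "auto" | "confirm" | "background";

interface AgentAction {
  id: string;                       // e.g. "calendar.nextEvent"
  description: string;              // natural-language hint for the agent's planner
  policy: AgentPolicy;              // when the agent may run this without asking
  run(params: Record<string, unknown>): Promise<unknown>;
}

// A minimal registry the OS layer could read from.
const registry = new Map<string, AgentAction>();

function exposeAction(action: AgentAction): void {
  registry.set(action.id, action);
}

// How the agent might dispatch a request it has already mapped to an action.
async function dispatch(id: string, params: Record<string, unknown>): Promise<unknown> {
  const action = registry.get(id);
  if (!action) throw new Error(`no app exposes action ${id}`);
  if (action.policy === "confirm") {
    // A real system would surface a confirmation prompt to the user here.
    console.log(`agent asks: run ${action.id}?`);
  }
  return action.run(params);
}

// Example: a calendar app exposes a read-only query the agent answers automatically.
exposeAction({
  id: "calendar.nextEvent",
  description: "Return the user's next calendar event",
  policy: "auto",
  run: async () => ({ title: "Standup", time: "09:30" }),
});
```

The key difference from today's shortcut systems is the `policy` field plus the agent's freedom to pick which registered action to call, rather than a user invoking one predeclared shortcut by name.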

  • That is a much deeper role than today's assistant hooks. Apple and Android both let developers wire limited actions into assistants through App Intents or App Actions, but those flows depend on developers predeclaring what the assistant can do. Wafer is describing a layer that can reason across many apps and user history, not just fire pre-approved shortcuts.
  • The closest analog is infrastructure that disappears into other products. Granola moved closer to the OS so it could listen through the microphone without living inside Zoom, and Perplexity Assistant can trigger multi-app actions on Android, but both still operate with narrower system access than a full OS fork.
  • If this model works, app design shifts toward back-end economics. LinkedIn, Uber, Spotify, and others would compete to be the best data source or action provider for the agent layer, because the agent would choose which service to call. The consumer-facing advantage moves from prettier UI to better inventory, metadata, pricing, and reliability.
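The back-end economics point can be made concrete with a small, hypothetical sketch: several apps register as providers for the same capability (here, booking a ride), and the agent layer scores them on inventory, price, and reliability rather than on UI. The provider names, fields, and scoring weights below are all invented for illustration.

```typescript
// Hypothetical: competing apps registered as providers for one capability.
// The agent picks a provider by comparing inventory, price, and reliability.

interface RideProvider {
  name: string;
  etaMinutes: number;   // inventory: how fast a car can arrive
  price: number;        // quoted fare for the requested trip
  reliability: number;  // 0..1, historical completion rate
}

// Lower score wins: weight speed and price, penalize unreliable providers.
// The weights are arbitrary; a real agent would tune them per user and task.
function score(p: RideProvider): number {
  return p.etaMinutes * 0.5 + p.price * 0.3 + (1 - p.reliability) * 20;
}

function chooseProvider(providers: RideProvider[]): RideProvider {
  return providers.reduce((best, p) => (score(p) < score(best) ? p : best));
}

const providers: RideProvider[] = [
  { name: "AppA", etaMinutes: 4, price: 18, reliability: 0.97 },
  { name: "AppB", etaMinutes: 3, price: 22, reliability: 0.88 },
];

console.log(chooseProvider(providers).name); // AppA: cheaper and more reliable wins
```

Under this model, "winning" for an app means scoring well in functions like `score` above, which is why the competitive advantage moves to inventory, metadata, pricing, and reliability.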

Over the next few years, the important battle is not app versus no app. It is whether assistants stay thin wrappers over developer-provided intents, or become the primary decision layer that routes work across apps. If an OS-level agent wins, the strongest apps will be the ones that make themselves easy for that layer to read from and write to.