OS-level AI for cross-device context
Sam Hall, CEO of Wafer, on AI agent form factors
The real bet is that the winning AI product may not be a single device but the software layer that can read context across the system and adapt its output to whatever surface is available. That is why Wafer is building below the app layer. A phone, glasses, or an audio-only device all have different screens and controls, but the core job is the same: understand what matters right now, then present or act on it in whatever form the hardware supports.
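The idea of one core job rendered differently per surface can be sketched in a few lines. This is an illustrative toy, not Wafer's actual API; the `Surface`, `Intent`, and `render` names are assumptions invented for the example.

```python
# Hypothetical sketch: a single "intent" adapted to whatever surface is
# available. All names here are illustrative, not a real OS interface.
from dataclasses import dataclass
from enum import Enum

class Surface(Enum):
    PHONE = "phone"       # full screen, touch input
    GLASSES = "glasses"   # small glanceable HUD
    AUDIO = "audio"       # voice only, no display

@dataclass
class Intent:
    summary: str          # one-line gist of what matters right now
    detail: str           # longer context for rich displays
    action: str           # suggested next step

def render(intent: Intent, surface: Surface) -> str:
    """Same intent, different presentation per hardware surface."""
    if surface is Surface.PHONE:
        # Rich display: full detail plus a tappable action.
        return f"{intent.summary}\n{intent.detail}\n[{intent.action}]"
    if surface is Surface.GLASSES:
        # Glanceable HUD: one short line.
        return f"{intent.summary} | {intent.action}"
    # Audio-only: spoken prompt with a voice confirmation.
    return f"{intent.summary}. Say 'go' to {intent.action}."

ride = Intent("Ride in 5 min", "UberX, $14, 2 min closer than Lyft", "confirm pickup")
print(render(ride, Surface.AUDIO))
```

The point of the sketch is that the context layer owns the decision of *what* matters; each surface only changes *how* it is presented or acted on.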
This approach treats apps less like destinations and more like back ends. Instead of opening Uber, Lyft, Spotify, Gmail, and Calendar one by one, the OS can watch patterns across them, compare options, and surface a next step. That only works if the system can see beyond each app sandbox.
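What "apps as back ends" means mechanically can be shown with a toy: the OS layer reads structured options from several apps, applies cross-app context (here, a calendar gap), and surfaces one next step. The app names, fields, and `next_step` function are assumptions for illustration, not a real system interface.

```python
# Hypothetical sketch: the OS layer compares options across app sandboxes
# and surfaces a single next step. Data shapes are illustrative only.
from typing import TypedDict

class RideOption(TypedDict):
    app: str          # which app produced this option
    eta_min: int      # minutes until pickup
    price_usd: float  # quoted fare

def next_step(options: list[RideOption], calendar_gap_min: int) -> str:
    """Pick the cheapest ride that still fits before the next calendar event."""
    viable = [o for o in options if o["eta_min"] <= calendar_gap_min]
    if not viable:
        return "No ride fits before your next meeting."
    best = min(viable, key=lambda o: o["price_usd"])
    return f"Book {best['app']} ({best['eta_min']} min, ${best['price_usd']:.2f})?"

options: list[RideOption] = [
    {"app": "Uber", "eta_min": 4, "price_usd": 15.50},
    {"app": "Lyft", "eta_min": 7, "price_usd": 12.75},
]
print(next_step(options, calendar_gap_min=10))
```

The comparison itself is trivial; the hard part, as the paragraph notes, is that no single app sandbox can see both the ride quotes and the calendar, which is why this logic has to live below the app layer.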
It is a different path from assistant replacements like Perplexity's assistant on Android. Those assistants can trigger actions that developers expose through the platform's intent hooks, but they are limited to whatever apps choose to provide. A full OS fork aims for broader read access and more complete context.
The broader market is moving toward owning the interface layer. Perplexity is pushing into an agentic browser, and Manus combined browser control, research, and integrations into a consumer agent. Wafer is pushing the same logic one level deeper, into the operating system itself and eventually across new hardware shells.
If AI shifts computing from tapping through app menus to receiving the right answer or action in the right modality, the advantage will move toward whoever controls the context layer. That sets up a race where browsers, assistants, and operating systems all converge, and the most durable winner is likely the one that can travel across phones, glasses, audio, and yet-to-be-invented devices.