Who Controls the Default Assistant

Sam Hall, CEO of Wafer, on AI agent form factors

Interview
If they're stuck with whatever Google puts in Android, they might not have anything better than Apple Intelligence in two years.

This reveals that the real battleground is not AI features in themselves but who controls the default assistant layer on the phone. Samsung already ships Gemini inside One UI and can wire it into Samsung apps, but that still leaves Google controlling much of the underlying AI stack. Apple, by contrast, owns the chip, the OS, the assistant, and the app hooks together, which makes it easier to turn AI into native system behavior rather than an add-on.

  • Wafer is arguing that Android OEMs have a dependency problem, not just a model problem. If Samsung relies on Google for the assistant, the cross-app actions, and the upgrade path, its AI experience risks looking like a branded layer on top of Google rather than a truly differentiated product.
  • The product constraint is concrete. App-based assistants can only do what developers expose through app intents or similar APIs. An OS fork can watch how people move between apps, understand more on-screen context, and automate actions without waiting for Uber, Spotify, or Outlook to build explicit assistant support.
  • There is precedent for software-first differentiation becoming hardware leverage. Xiaomi began with a custom Android ROM before turning that software identity into a phone business. Wafer wants the same sequence: start with enthusiasts installing a new mobile OS, then use that demand to sell OEMs a day-one AI experience.
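The product constraint in the second bullet can be sketched as a toy model: an app-based assistant is gated by whatever actions each app has chosen to register, while an OS-level agent would not depend on that opt-in. This is an illustration only; the class and intent names are hypothetical, not any real platform API.

```python
# Toy model of the app-intent constraint: the assistant can only trigger
# actions that each app explicitly registered. Apps that shipped no
# assistant support (an empty set) are simply out of reach.

class App:
    def __init__(self, name, exposed_intents):
        self.name = name
        # Only what the developer chose to expose is callable.
        self.exposed_intents = exposed_intents

    def perform(self, intent, **params):
        if intent not in self.exposed_intents:
            raise PermissionError(f"{self.name} does not expose '{intent}'")
        return f"{self.name} performed {intent} with {params}"


class AppBasedAssistant:
    """An assistant limited to intents that apps have opted into."""

    def __init__(self, apps):
        self.apps = {app.name: app for app in apps}

    def act(self, app_name, intent, **params):
        return self.apps[app_name].perform(intent, **params)


if __name__ == "__main__":
    apps = [
        App("Spotify", {"play_playlist"}),
        App("Uber", set()),  # no assistant support shipped yet
    ]
    assistant = AppBasedAssistant(apps)

    print(assistant.act("Spotify", "play_playlist", name="Focus"))
    try:
        assistant.act("Uber", "book_ride", destination="Airport")
    except PermissionError as err:
        print("blocked:", err)
```

The point of the sketch is the failure branch: until Uber registers an intent, no amount of model quality lets the assistant book a ride, which is exactly the gap an OS-level agent with screen context could route around.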

The next phase is a fight over whether Android OEMs keep accepting Google-led AI or build their own system layer before Apple extends its lead. If AI becomes the main way people navigate phones, the companies that own the assistant, the context, and the action loop will own the user relationship, and everyone else will be reduced to hardware or app supply.