Always-On Audio AI Glasses
Sesame AI
AI glasses matter because they turn voice from an occasional app into a default interface, one available in the tiny gaps of the day. A phone requires a deliberate unlock and screen check. Glasses sit on the face with microphones and speakers always ready, making it easy to ask for directions, reminders, summaries, or quick answers while walking, shopping, or commuting. That shift can drive frequency and habit strength far beyond those of software used only when someone decides to open an app.
-
The closest proof point is Meta. Its Ray-Ban line pushed smart glasses into a real consumer category, and newer models are explicitly marketed around getting help, messages, translations, and AI responses without reaching for a phone. That validates the basic behavior change Sesame is targeting, even though Sesame is taking an audio-first path instead of leading with cameras and displays.
-
Audio-first glasses can avoid the biggest social tax in wearable AI. Camera glasses raise immediate privacy concerns because bystanders may feel recorded. A voice-focused device with open-ear audio and conversational AI keeps the core benefit, fast access to an assistant, without making every interaction look like filming. That makes all-day wear more plausible.
-
This also changes Sesame's business model. If the company only licenses voice models, it gets paid when another product uses them. If it also sells the device, it can capture hardware revenue, bundle subscriptions, and tune latency, wake words, battery life, and on-device processing around one specific use case: natural back-and-forth conversation throughout the day.
-
The next phase is a contest over who owns ambient computing time. The winners will be the companies that make AI assistance feel effortless enough to use dozens of times a day while keeping the device comfortable, socially acceptable, and useful without a screen. If Sesame executes, glasses could become the highest-frequency home for its conversational model.