Matic's Camera-First Navigation
Choosing vision over lidar is really a bet that the hardest part of home cleaning is understanding what is on the floor, not just knowing where the walls are. Lidar is good at measuring room shape quickly, but cameras let the robot tell hardwood from rug, spot cables and spills, and decide in real time whether to mop, vacuum harder, or steer around an object. Matic pairs that richer scene understanding with on-device compute, so it can map and react without sending household imagery to the cloud.
Most robot vacuums now mix sensors. Premium models from Dreame, Ecovacs, and newer Roombas commonly use lidar for fast mapping plus cameras for obstacle recognition. Matic is more extreme: it uses multiple RGB and infrared cameras as the primary sensing stack, which pushes more of the problem into software and on-device AI.
That matters because a home floor is full of small, messy edge cases that simple distance sensing misses. A lidar scan can see that something is there, but a vision system can classify whether it is a cord, a threshold, a wet spill, or a patch of carpet, then change suction or lift the mop before contact.
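As a rough illustration of that decision step, here is a minimal Python sketch. The labels and actions are invented for the example, not taken from Matic's software; the point is only that a classifier output lets the planner branch on what an obstacle is, not just where it is.

```python
from enum import Enum, auto

class Detection(Enum):
    """Hypothetical labels a floor-facing vision model might emit."""
    HARDWOOD = auto()
    RUG = auto()
    CABLE = auto()
    WET_SPILL = auto()
    THRESHOLD = auto()

class Action(Enum):
    """Cleaning behaviors the planner can switch between before contact."""
    VACUUM = auto()
    MOP = auto()
    LIFT_MOP_AND_VACUUM = auto()
    STEER_AROUND = auto()

def choose_action(label: Detection) -> Action:
    """Map what the camera sees to what the robot should do next.

    A range-only sensor can report "something is 40 cm ahead";
    a classification lets the policy branch on what that something is.
    """
    policy = {
        Detection.CABLE: Action.STEER_AROUND,        # avoid tangling the brush
        Detection.WET_SPILL: Action.MOP,             # switch tools for liquid
        Detection.RUG: Action.LIFT_MOP_AND_VACUUM,   # keep the carpet dry
        Detection.THRESHOLD: Action.VACUUM,          # safe to drive over
        Detection.HARDWOOD: Action.VACUUM,
    }
    return policy[label]

if __name__ == "__main__":
    print(choose_action(Detection.WET_SPILL))  # Action.MOP
```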
The tradeoff is engineering difficulty. Camera-first navigation needs enough processing power to build a 3D map from images and still make split-second driving decisions in changing light and clutter. Matic built its robot around Nvidia's Jetson Orin and says the raw camera data is processed locally and discarded, which turns privacy into part of the product advantage, not just a compliance feature.
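A rough sketch of what "process locally and discard" can look like in code, assuming a hypothetical on-device perception stack (detect_obstacles, classify_floor, and estimate_motion are invented names, not Matic's API): only compact derived outputs survive each frame, so raw household imagery never has to be stored or uploaded.

```python
from dataclasses import dataclass

@dataclass
class FrameSummary:
    """Compact outputs kept after a frame is processed on the robot."""
    obstacles: list[tuple[str, float, float]]  # (label, x, y) in the robot's frame
    floor_type: str                            # e.g. "hardwood" or "rug"
    pose_delta: tuple[float, float, float]     # visual-odometry estimate (dx, dy, dtheta)

def process_frame(raw_image, model) -> FrameSummary:
    """Run perception on-device and return only derived data.

    `model` stands in for whatever detection/segmentation/odometry stack
    runs on the Jetson; nothing here writes raw pixels to disk or to the
    network, and the caller is expected to drop its reference as well.
    """
    return FrameSummary(
        obstacles=model.detect_obstacles(raw_image),
        floor_type=model.classify_floor(raw_image),
        pose_delta=model.estimate_motion(raw_image),
    )

def navigation_loop(camera, model, mapper):
    """Each raw frame lives only for the duration of one loop iteration."""
    for raw_image in camera.frames():
        summary = process_frame(raw_image, model)
        mapper.update(summary)  # the 3D map is built from summaries, not images
        # raw_image is overwritten on the next iteration and never persisted
```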
The category is moving toward richer perception, not less. As robot vacuums take on more mixed flooring, tighter spaces, and bolder obstacle-avoidance promises, the winners will look less like simple mapping machines and more like small autonomous robots that can recognize a room the way a person does, while keeping that computation cheap and local.