Policy-fit moderation for communities
Oscar Beijbom, co-founder and CTO of Nyckel, on the opportunities in the AI/ML tooling market
The real product here is not moderation accuracy in the abstract, it is policy fit for a specific community. A dating app needs a model that can tell the difference between flirting, harassment, scams, and explicit content using that app's own norms, not the safer and more generic thresholds built into broad moderation APIs. That is why Nyckel asks customers to bring their own examples, label them, and then deploy a custom classifier directly into production.
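To make "policy fit" concrete, here is a minimal sketch of what bringing your own examples could look like for the dating-app case above. The label names mirror the categories in this piece; the data structure itself is illustrative, not Nyckel's actual upload format.

```python
# Hypothetical labeled examples for a dating-app moderation classifier.
# Labels mirror the categories named above (flirting, harassment, scam,
# explicit), plus an "ok" class for normal user behavior.
labeled_examples = [
    ("hey, I love your profile, coffee sometime?", "flirting"),
    ("answer me right now or you'll regret it",    "harassment"),
    ("send $500 and I'll double it, guaranteed",   "scam"),
    ("trading explicit pics, DM me",               "explicit"),
    ("what neighborhood are you in?",              "ok"),
]
```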
Nyckel is built so non-ML experts can do the setup themselves. In the interview, customers upload their own text or images, label roughly 10 to 100 examples, see cross-validated results on their own data in seconds, and then push the model live, as in the sketch below. The content moderator or product manager, not an ML engineer, defines the standard.
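A sketch of that workflow as code, assuming REST endpoints shaped like Nyckel's public API (create a function, post labeled samples, invoke). The exact paths, field names, and auth flow here are assumptions for illustration, not verified against Nyckel's docs.

```python
import requests

API = "https://www.nyckel.com/v1"
HEADERS = {"Authorization": "Bearer <access-token>"}  # OAuth token, obtained separately

labeled_examples = [  # see the label schema sketched earlier
    ("hey, I love your profile, coffee sometime?", "flirting"),
    ("send $500 and I'll double it, guaranteed",   "scam"),
]

# 1. Create a text classification function; the customer defines the labels.
fn = requests.post(
    f"{API}/functions", headers=HEADERS,
    json={"name": "dating-moderation", "input": "Text", "output": "Classification"},
).json()

# 2. Upload and label roughly 10-100 examples (the range from the interview).
for text, label in labeled_examples:
    requests.post(
        f"{API}/functions/{fn['id']}/samples", headers=HEADERS,
        json={"data": text, "annotation": {"labelName": label}},
    )

# 3. Invoke the classifier on new content. Training and cross-validation
#    happen server-side as samples arrive, so there is no explicit train step.
result = requests.post(
    f"{API}/functions/{fn['id']}/invoke", headers=HEADERS,
    json={"data": "new message to moderate"},
).json()
print(result.get("labelName"), result.get("confidence"))
```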
That workflow matters most in edge cases like LGBTQ dating, where a general moderation model may overblock normal user behavior or miss community-specific scams. Nyckel's broader pitch is that domain shift breaks generic models, so every serious classification task should be tested and tuned on the customer's own data before production use.
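That "test on your own data first" advice is easy to sanity-check without any vendor at all. Below is a generic scikit-learn sketch (not Nyckel's internals) that cross-validates a simple baseline on a community's own labeled messages, a quick read on how much signal those labels carry before trusting any model in production.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Toy stand-in for a community's own labeled data; in practice this would be
# the 10-100 examples the moderator labeled.
texts = [
    "hey cutie, love your profile",
    "you have a great smile",
    "coffee this weekend?",
    "send me $500 and I'll double it",
    "I'm a crypto broker, DM for guaranteed gains",
    "wire the deposit to this account now",
]
labels = ["ok", "ok", "ok", "scam", "scam", "scam"]

# Cross-validate a simple baseline on the community's own data. A generic
# moderation API never sees this step, which is exactly where domain shift hides.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, texts, labels, cv=3)
print(f"cross-validated accuracy on our own data: {scores.mean():.2f}")
```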
This positions Nyckel differently from heavier MLOps vendors like Scale AI, DataRobot, and Dataiku. Those products cover larger parts of the ML stack for expert teams, while Nyckel is trying to collapse labeling, training, validation, and deployment into one fast workflow for startup CTOs and product owners who just need a working classifier inside an app.
The direction of travel is toward custom models becoming a normal app feature, not a special ML project. As foundation models improve, the winning products in moderation and other classification tasks will be the ones that let operators encode their own rules quickly, retrain on fresh edge cases, and ship updates without building a full internal ML team.