Copyleaks partnership added trust layer
David Park, CEO and co-founder of Jenni AI, on prosumer generative AI apps post-ChatGPT
The Copyleaks partnership shows that Jenni was not just selling better writing; it was selling proof that the writing was safe to submit. Early academic users worried less about style than about being accused of copying, so Jenni added a visible trust layer. That mattered because LLM output is usually new text, not copied text: the real product problem was user confidence and academic legitimacy, not classic plagiarism detection.
-
Jenni was built for students and researchers, with features like an autocomplete that fires when typing pauses, plus citation tools and a research library. In that workflow, a plagiarism check sits naturally at the end, right before submission, as reassurance that the draft will not trip a school integrity tool.
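The pause-triggered autocomplete described above is, at its core, an idle-detection decision: suggest only once the user has stopped typing for some threshold. A minimal sketch of that logic, with an assumed threshold value and hypothetical class and method names (the article does not describe Jenni's implementation):

```python
IDLE_THRESHOLD = 0.6  # seconds of silence before suggesting (assumed value)

class AutocompleteTrigger:
    """Fires a suggestion only after the user pauses typing.

    Timestamps are passed in explicitly so the policy is easy to test;
    a real editor would wire this to keystroke events and a clock.
    """

    def __init__(self, idle_threshold=IDLE_THRESHOLD):
        self.idle_threshold = idle_threshold
        self.last_keystroke = None

    def on_keystroke(self, now):
        # Every keystroke resets the idle timer.
        self.last_keystroke = now

    def should_suggest(self, now):
        # Suggest only if the user has been silent long enough.
        if self.last_keystroke is None:
            return False
        return (now - self.last_keystroke) >= self.idle_threshold

trigger = AutocompleteTrigger()
trigger.on_keystroke(now=0.0)
print(trigger.should_suggest(now=0.3))  # False: still mid-typing
print(trigger.should_suggest(now=1.0))  # True: pause detected
```

The design choice that matters is debouncing on silence rather than suggesting on every keystroke: it keeps the model call volume down and avoids interrupting a user who is still composing.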
-
This also highlights the difference between plagiarism detection and AI detection. Copyleaks built both. Its plagiarism products look for copied or closely paraphrased text across web and academic sources, while its newer AI-detection products try to infer whether a passage was machine-written even when it is entirely original.
-
Once ChatGPT taught mainstream users that a model could generate fluent original prose, Jenni no longer needed to spend product space proving the text was not copied. The trust burden moved from originality to usefulness, which is why retention and daily workflow features became more important than a checker widget.
Going forward, academic AI products will compete less on showing that text is not copied and more on fitting cleanly into real study workflows. The winners will be tools that help a student find papers, draft with citations, revise in place, and leave an audit trail that feels acceptable to schools, not just tools that generate paragraphs.