Prolific bets on human judgment
Jemma White, COO of Prolific, on why humans ensure AI safety
Prolific’s bet is that the scarce input in AI is moving from raw labor and narrow expertise toward real human judgment. The company started as an academic research platform built to recruit reliable study participants fast, then reused that same matching, screening, and profiling system for AI work. That matters because models now need people who can express cultural nuance, emotional range, and trustworthy reactions, not just complete annotation tasks at scale.
Prolific’s origin shapes the product. Researchers choose participant traits, sample size, and pay level in a self-serve workflow, then the platform matches from a long-lived, vetted pool. That is very different from a recruiting firm or BPO stitching together workers for each project.
The market has moved in stages. Mechanical Turk-style crowdwork won on cost. Then reasoning models created demand for PhDs and other credentialed experts, helping companies like Handshake and Mercor grow. Prolific is positioned for the next step because it profiles behavior, preferences, languages, and personality traits, not just resumes.
In practice, this means AI teams can recruit for things like resilience in trust and safety work, multilingual fluency, or specific cultural context, then run studies through an API or self-serve tools and get results back quickly. That is especially useful for red teaming, evals, localization, and product testing for global AI apps.
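To make that workflow concrete, here is a minimal sketch of what a programmatic recruiting call might look like. The endpoint, field names, and filter schema are hypothetical illustrations, not the documented Prolific API; the point is that sample size, pay, and behavioral screeners are parameters of a single request rather than a bespoke recruiting project.

```python
import requests

# Hypothetical endpoint and token; the real platform API and its
# filter schema may differ.
API_BASE = "https://api.example-participant-platform.com/v1"
API_TOKEN = "YOUR_API_TOKEN"

study = {
    "name": "Red-team eval: culturally sensitive refusals (ES/PT)",
    "total_places": 200,              # sample size
    "reward_per_hour": 14.50,         # pay level
    "estimated_minutes": 20,
    # Behavioral and contextual screeners, not just credentials
    "filters": {
        "fluent_languages": ["es", "pt"],
        "country_of_residence": ["BR", "MX", "ES"],
        "prior_trust_and_safety_experience": True,
    },
}

resp = requests.post(
    f"{API_BASE}/studies",
    json=study,
    headers={"Authorization": f"Token {API_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print("Created study:", resp.json()["id"])
```

A self-serve dashboard would expose the same knobs through a form; the API path matters mainly for teams wiring recruitment into automated eval or red-teaming pipelines.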
Over the next few years, human data vendors will separate into three layers: commodity labor, credentialed expert networks, and human context platforms. Prolific is pushing toward the third layer. If regulation and global product deployment keep raising the value of auditable human judgment, that layer should capture the highest quality demand and the strongest margins.