Prolific: built for human measurement


Jemma White, COO of Prolific, on why humans ensure AI safety

Interview
Prolific’s origins and product-market fit are very different from those of companies that started as recruitment firms or labor marketplaces and later pivoted into AI training.

Prolific’s edge is that it was built as a measurement system for human quality, not as a labor pool that later found an AI buyer. That shows up in the product: customers can self-serve, filter on thousands of traits, see pay transparently, plug into APIs, and get responses in minutes from a participant base profiled over years. That is a very different engine from firms built around recruiter ops, contractor volume, or managed-service delivery.

  • The original workflow was academic research, where bad samples ruin the result. That forced Prolific to solve participant verification, matching, fraud detection, screening, and speed from day one. AI demand arrived later, but it fit the same core job: finding the right humans fast and producing trustworthy data.
  • A labor marketplace usually wins by filling seats. Prolific instead wins by reducing search and setup time for a researcher or model team. The customer chooses exact audience slices, launches a study directly, and often gets data back the same day, without long scoping or heavy services layers.
  • That makes Prolific structurally different from adjacent players. Invisible began as a virtual assistant service before moving into RLHF. Scale grew around large-scale labeling and managed workflows. Prolific is closer to an API-driven participant graph, where the asset is longitudinal profile depth and response quality, not just worker supply.
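The precision-matching engine described above can be pictured as a filter over a pre-profiled participant pool: the customer specifies an exact audience slice, and the platform returns only participants whose stored traits satisfy it. The sketch below is a toy illustration under that assumption; `Participant`, `match`, and the trait names are hypothetical and do not reflect Prolific’s actual API.

```python
# Toy sketch of precision matching over a pre-profiled participant pool.
# All names here are illustrative, not Prolific's real data model or API.
from dataclasses import dataclass, field


@dataclass
class Participant:
    pid: str
    traits: dict = field(default_factory=dict)  # longitudinal profile data


def match(pool, required_traits):
    """Return participants whose profile satisfies every requested filter."""
    return [
        p for p in pool
        if all(p.traits.get(k) == v for k, v in required_traits.items())
    ]


pool = [
    Participant("p1", {"country": "UK", "bilingual": True, "age_band": "25-34"}),
    Participant("p2", {"country": "US", "bilingual": False, "age_band": "25-34"}),
    Participant("p3", {"country": "UK", "bilingual": True, "age_band": "45-54"}),
]

# An exact audience slice: UK-based bilingual participants.
audience = match(pool, {"country": "UK", "bilingual": True})
print([p.pid for p in audience])  # → ['p1', 'p3']
```

The point of the sketch is the inversion it makes visible: the expensive work (profiling) happens once, up front, so each study launch is just a cheap query, which is why turnaround can be same-day rather than gated on recruiter operations.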

The market is moving toward more specialized evals, cultural nuance, trust and safety, and ongoing model monitoring. That shift favors platforms with deep participant metadata and fast self-serve tooling. As AI buyers need external validation more often, the companies built around precision matching and auditable human data should capture a larger share of post-training and evaluation spend.