Human Data Retention Enables AI Safety
Jemma White, COO of Prolific, on why humans ensure AI safety
This market looks durable only when a platform keeps enough take rate after paying workers while giving workers enough repeat work and fair enough pay to stay. In human data, headline revenue often includes money passed straight through to contractors, so it can make low-quality labor arbitrage look stronger than it is. The stronger businesses are those with retained participant pools, transparent payouts, and software that lets each project clear quickly without constant resupply costs.
-
Prolific is built to optimize net economics, not just volume. Researchers set participant criteria, sample size, and pay; Prolific matches from a pool of 200,000 active, vetted participants; and participants receive about 70% of researcher payments. That structure makes marketplace health visible in a way that opaque managed-services models often do not.
-
The comparison set shows why gross revenue can mislead. Handshake AI is estimated at 25 to 40% gross margins because 60 to 70% of revenue goes out as contractor payouts, while its software-heavy job board runs near 80% gross margins. Scale can report far larger revenue because it bundles labor and software, but it still pays contractors every time work is done.
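The pass-through point above is simple arithmetic, and a quick sketch makes it concrete. The figures and payout shares below are illustrative assumptions, not reported numbers from Prolific, Handshake AI, or Scale:

```python
# Minimal sketch: why headline (gross) revenue can mislead when contractor
# payouts are passed through. All inputs here are hypothetical.

def net_revenue(gross: float, payout_share: float) -> float:
    """Revenue the platform retains after paying contractors."""
    return gross * (1.0 - payout_share)

# Two hypothetical platforms reporting the same headline revenue.
gross = 100_000_000  # $100M gross for both

# Labor-heavy model: ~70% of customer payments pass through to workers.
labor_heavy = net_revenue(gross, payout_share=0.70)

# Software-heavy model: ~20% of revenue goes out as payouts.
software_heavy = net_revenue(gross, payout_share=0.20)

print(f"labor-heavy net:    ${labor_heavy:,.0f}")
print(f"software-heavy net: ${software_heavy:,.0f}")
```

Same headline number, very different retained economics: the labor-heavy platform keeps about $30M, the software-heavy one about $80M, which is why net revenue (or take rate) is the comparable figure.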
-
Supply-side retention is the hidden asset because human data quality compounds over time. Prolific maintains a vetted pool, long-term performance data, anti-fraud systems, and a 2-million-person waitlist, which lets it fill studies quickly without rebuilding the workforce for each job. That lowers churn risk on both sides of the marketplace.
As AI work shifts from generic labeling to safety checks, cultural nuance, and regulated evaluation, the winners will look less like outsourced labor shops and more like trusted labor marketplaces with software margins. That favors platforms that can keep participants engaged for years, prove data quality, and turn contractor spend into repeatable net revenue.