From Crowdwork to Expert Marketplaces

Joe Kim, CEO of Office Hours, on the end of crowdwork:

"We've moved past the crowd-work style of data labeling and annotation to improve model effectiveness."

This shift turns data labeling from a labor-supply problem into a credentialed talent marketplace. Early model training could rely on large pools of cheaper workers to tag images or text, but post-training for reasoning models now needs people who can judge whether an answer is legally sound, clinically accurate, or financially useful. That favors networks like Office Hours that are built to find, verify, schedule, and retain hard-to-reach experts, not just route tasks to a crowd.

  • The practical difference is in the task itself. A crowd worker can mark whether an image contains a cat; it takes a banking operator, physician, or lawyer to tell whether a model handled a niche workflow correctly, asked the right follow-up question, or made a dangerous mistake in context.
  • Comparable platforms show where demand is going. Handshake used its university and PhD network to launch Handshake AI, reaching an estimated $80M in annualized revenue within 8 months, while Mercor, Invisible, and Prolific have all grown by supplying higher-skill human feedback to frontier labs.
  • Office Hours is structurally closer to this new demand than a classic annotation vendor. Its core product already covers search, matching, compliance, scheduling, and trust around verified experts, and the company frames AI training as another way to monetize specialized knowledge alongside expert calls, user research, and mentorship.

The likely next step is that expert networks and recruiting marketplaces keep blending into AI infrastructure. As labs push models into medicine, law, finance, and enterprise workflows, the winning platforms will be the ones that can reliably deliver verified experts, capture their judgments in usable formats, and keep those experts engaged for repeat work.
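
To make "capture their judgments in usable formats" concrete, here is a minimal sketch of what a single expert-judgment record might look like as structured training data. Everything in it, including the ExpertJudgment schema, its field names, and the banking example, is a hypothetical illustration rather than a published format from Office Hours or any lab.

```python
# Hypothetical sketch: one way a platform might capture an expert's review
# of a model answer as structured post-training data. The schema and the
# example record are illustrative assumptions, not a real published format.
from dataclasses import dataclass, asdict
import json


@dataclass
class ExpertJudgment:
    task_id: str            # the workflow or prompt being evaluated
    domain: str             # e.g. "clinical", "legal", "banking"
    expert_credential: str  # verified credential backing the judgment
    model_answer: str       # the model output under review
    verdict: str            # "sound" | "flawed" | "dangerous"
    rationale: str          # the expert's reasoning, the scarce asset here
    suggested_fix: str      # what a correct answer should have done


record = ExpertJudgment(
    task_id="kyc-edge-case-0042",
    domain="banking",
    expert_credential="former KYC operations lead",
    model_answer="Approve the account; the mismatched address is immaterial.",
    verdict="dangerous",
    rationale="An address mismatch in a high-risk jurisdiction triggers "
              "enhanced due diligence; approving without it breaks KYC policy.",
    suggested_fix="Escalate for enhanced due diligence before any approval.",
)

# One JSON object per line (JSON Lines) is a common interchange format
# for post-training datasets.
print(json.dumps(asdict(record)))
```

The rationale field is the point of the shift described above: the valuable output is the expert's contextual reasoning, not a bare label, and it only becomes training data once it is captured in a machine-readable record like this.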