Context Engineering Over Chat
Levi Lian, CEO of Raycaster, on why vertical AI is workflows-first and chat-last
The durable product in vertical AI is not the chat box; it is the company-specific operating layer that tells the model what documents matter, what good looks like, and who must approve the result. In Raycaster’s case, that means the agent is not free-forming across long files. It is checking real pharma objects against templates, acceptance rules, and permissions, then pointing to the exact page, proposing an edit, and sending it into the existing review chain.
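That bounded, traceable output can be made concrete. Below is a minimal sketch of what an agent's edit proposal might look like as a data structure; all names here (`ProposedEdit`, `check_against_rules`, the `acceptance_rules` format) are hypothetical illustrations, not Raycaster's actual schema.

```python
from dataclasses import dataclass

@dataclass
class ProposedEdit:
    document_id: str      # which pharma artifact is being edited
    page: int             # the exact page the agent points to
    original_text: str
    suggested_text: str
    rule_violated: str    # the acceptance rule that triggered the proposal
    approver_role: str    # who in the existing review chain must sign off

def check_against_rules(doc_id: str, page: int, text: str,
                        acceptance_rules: dict[str, str]) -> list[ProposedEdit]:
    """Flag text that breaks an acceptance rule; emit a traceable edit proposal.

    Here each rule is simplified to a banned phrase mapped from a rule name.
    """
    edits = []
    for rule_name, banned_phrase in acceptance_rules.items():
        if banned_phrase in text:
            edits.append(ProposedEdit(
                document_id=doc_id,
                page=page,
                original_text=text,
                suggested_text=text.replace(banned_phrase, "[NEEDS REVIEW]"),
                rule_violated=rule_name,
                approver_role="medical_writer",  # routed, not auto-applied
            ))
    return edits
```

The point of the structure is that nothing is free-form: every proposal carries its document, page, rule, and approver, so a reviewer can audit exactly why the edit exists.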
-
Chat-with-agent loops break when the model has to infer hidden org structure from prompts alone. Raycaster instead loads repositories, schemas, review roles, and tool plans up front, which turns the task from an open-ended conversation into bounded document work with traceable outputs.
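The "loaded up front" idea can be sketched as a single structured payload assembled before any generation happens. Field names and the `build_context` helper are illustrative assumptions, not a real Raycaster API.

```python
def build_context(repo_paths: list[str],
                  schemas: dict[str, list[str]],
                  review_roles: dict[str, str],
                  tool_plan: list[str]) -> dict:
    """Bundle everything the agent needs into one bounded, inspectable payload."""
    return {
        "repositories": repo_paths,    # which document stores matter
        "schemas": schemas,            # what fields each object type has
        "review_roles": review_roles,  # who approves which object type
        "tool_plan": tool_plan,        # allowed tool calls, in order
    }

# Hypothetical pharma setup: the agent only operates within this context,
# so its outputs can always be traced back to an explicit input.
ctx = build_context(
    repo_paths=["protocols/", "consent_forms/"],
    schemas={"protocol": ["title", "version", "endpoints"]},
    review_roles={"protocol": "regulatory_affairs"},
    tool_plan=["retrieve", "check_template", "propose_edit"],
)
```

The design choice is that the org structure is data the system supplies, not knowledge the model must guess from conversation.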
-
This matches the broader shift in vertical AI. Harvey moved from model-centric positioning toward packaged legal workflows and high-touch deployment, while Hebbia emphasizes orchestration for hybrid workflows where humans stay in control. The winning pattern is workflow control surfaces, not generic chat.
-
In regulated life sciences, this matters more because the cost of a bad edit is concrete. A protocol change can ripple into consent forms, analysis plans, records, and submission modules, creating rework, amendments, and review delays. Context engineering is what keeps those linked artifacts aligned.
The next step is for context layers to become the system of intelligence that sits on top of systems of record like Veeva and IQVIA. As base models keep improving, advantage shifts even further toward whoever owns the templates, traces, evaluator feedback, and approval logic that make AI outputs reliable enough to ship into real workflows.