Ramp's Speed versus Quality Framework
Geoff Charles, VP of Product at Ramp, on Ramp's AI flywheel
This framework shows that Ramp treats models like interchangeable infrastructure, not the product itself. In practice, the product decision is whether a user is waiting on an answer right now, like a support reply or transaction classification, or whether Ramp can spend extra seconds and dollars for a harder job like pulling renewal terms from a contract. That lets Ramp ship AI into many finance workflows without forcing every task onto one slow, expensive model.
-
Ramp already splits work across GPT-4, Claude, and local fine-tuned models. GPT-4 is used where output quality matters most, Claude sits in the middle, and local models handle simpler, high-volume jobs where speed and cost matter more than nuance.
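The routing logic described above can be sketched in a few lines. This is a hypothetical illustration of tier-based model routing, not Ramp's actual system: the model names, the `user_waiting` flag, and the complexity threshold are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    user_waiting: bool  # is a user blocked on this result right now?
    complexity: int     # 1 = simple classification, 3 = dense extraction

def route(task: Task) -> str:
    """Pick a model tier by task difficulty and latency sensitivity.

    Hypothetical sketch: hard jobs go to the highest-quality (slowest,
    most expensive) model; latency-sensitive simple jobs go to a cheap
    local model; everything else lands in the middle tier.
    """
    if task.complexity >= 3:
        return "gpt-4"            # quality-critical, e.g. contract extraction
    if task.user_waiting:
        return "local-finetuned"  # fast and cheap, e.g. inline classification
    return "claude"               # middle tier for background work

print(route(Task("contract_renewal_terms", user_waiting=False, complexity=3)))
print(route(Task("txn_classification", user_waiting=True, complexity=1)))
```

The point of the sketch is that the routing decision lives in the product layer, so any model in the table can be swapped out without touching the workflows that call it.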
-
This matches how Ramp’s product works. A finance team does not want to open a chatbot and ask twenty questions. They want receipts categorized, contract terms extracted, and approvals flagged inside the workflow, with only the uncertain edge cases pushed to a human.
-
The strategic payoff is broader than model optionality. Ramp is using AI to turn card, invoice, receipt, and contract data into one operating layer for spend control, which helps it expand from cards into bill pay, procurement, and enterprise expense software, where it competes with Brex, Bill.com, and Concur.
Going forward, the winners in finance AI will be the companies that route each task to the right cost and latency tier while collecting proprietary workflow data. Ramp’s speed versus quality framework points toward a future where most finance work is quietly pre-processed by specialized models, and humans step in only for the few decisions where judgment still matters most.