CEO at AI procurement startup on Supabase's compliance path and operational DX

Jan-Erik Asplund

Background

We spoke with the CEO of a public sector SaaS startup that has used Supabase since founding and now serves government agencies with strict cybersecurity requirements.

The conversation covers how Supabase's managed offering meets compliance frameworks like CMMC 2.0 and FedRAMP while preserving a self-hosting escape valve, how the browser-based interface creates operational leverage for non-technical executives, and why price elasticity extends to absorbing up to a 10x increase before migration would be considered.

Key points via Sacra AI:

  • Public sector compliance requirements (CMMC 2.0, NIST 800-171, FedRAMP, NIST 800-53) are met by Supabase's managed offering for most government customers, with self-hosting providing an escape valve for the subset of agencies requiring full on-premises deployment—enabling startups to adopt Supabase as SaaS and migrate specific customers to isolated instances without replacing their entire stack. "CMMC 2.0, NIST 800-171, FedRAMP, and NIST 800-53... When you look at those requirements, does Supabase's managed cloud offering meet what you need today? It works great for us... Another scenario would be if we have to move to an on-prem deployment where Supabase auth is not compatible with our end users' infrastructure. But I'm not aware of any looming limitations or deadlines we have to manage at this point... For these instances, it will be completely separate... On the operational side, do you feel like Supabase makes that easier or harder compared to what you'd expect if you were running something more bespoke? Easier."
  • The browser-based data editing interface creates operational leverage beyond developer productivity by enabling non-technical executives to directly debug customer issues, make production data edits, and handle account-level troubleshooting without requiring engineering support or command-line database access. "I can also access the data in-browser across different workspaces if I need to make specific edits or debug things for a customer. It's very fast, very responsive. I'm very happy with it as a user... From your own seat doing light product and engineering work, does Supabase lower the barrier for you to safely interact with production data and systems, or do you mostly stay out of it anyway? It does... I can make changes in the browser, as I mentioned before. That makes it easier for me versus having to stand up a computer-based interface."
  • Price elasticity in the public sector context extends to absorbing up to 10x cost increases before considering migration, suggesting that switching costs and operational integration create enough lock-in that price becomes a secondary concern once a startup has standardized on Supabase across development, staging, and production environments with automatic backups and local instance provisioning. "If Supabase announced a meaningful price increase tomorrow—say, materially higher database or auth costs—would that change your posture at all? Or is the switching cost high enough that you'd likely absorb it unless it was extreme? We'd probably absorb it for now... Probably 10x... It's great for our development process. We can spin up instances on local machines for when people are coding, so we can test with data. We can have versions of it in our staging environment that are production-like for people to point to and test on, and then we have production as well. If we need to back up data from Supabase, they store instances automatically, encrypted with AES-256, which we can flash and restore at any point if needed. It's incredibly easy."

Questions

  1. Tell me a bit about your professional background and what kind of work you do.
  2. How does Supabase fit into that picture? Is it something you're using at Hazel right now or more for your own projects?
  3. Roughly when did the team first start using Supabase?
  4. How did Supabase actually come into the picture for Hazel? Was there a formal evaluation of alternatives like Firebase or raw Postgres, or was it more of an organic choice by the founding team?
  5. Which specific part of Supabase was the initial hook for the team?
  6. Which specific Supabase products are you actually using day to day at Hazel? Is it just the database and auth, or are things like storage, edge functions, and real time also in the mix?
  7. How central is the auth piece compared to the database for your workflow?
  8. Have you had any experience with the real time features or pgvector for any of the AI work you're doing at Hazel?
  9. Do you have visibility into whether you're on a paid plan or the free tier? What does billing look like?
  10. Do you recall what specifically triggered the move to paid? Was it hitting a certain limit on database size or users, or was it more about needing specific production features?
  11. Now that you're paying for it, how does the pricing feel overall? Have there been any surprises in the bill as your data or user base has grown?
  12. What would you say is the real anchor that keeps Hazel on Supabase? Is it the convenience of having everything in one place, or is it that migrating away would just be too painful at this stage?
  13. How painful would that migration actually be—are we talking a few days of engineering time, or a months-long project that would essentially halt feature development?
  14. Does the fact that Supabase is built on open source Postgres give the team any peace of mind regarding lock-in? Or does it feel like the other layers—like auth and storage—make it a proprietary platform?
  15. Does that mean you've had to look into self-hosting Supabase to meet specific security or data residency requirements? Or is the managed cloud version sufficient for now?
  16. Does the team feel confident that Supabase's open source architecture will make that self-hosting transition smooth? Or is there any apprehension about the complexity of managing all those moving parts yourselves?
  17. Are you using AI coding tools like Cursor, Bolt, or Lovable in your workflow? If so, does Supabase feel well supported by those tools?
  18. Have you noticed if the speed of iteration has changed how you use Supabase? For example, are you spinning up more tables or edge functions because the AI makes it so easy to scaffold them?
  19. Have you run into any messiness or scaling issues where the database schema is growing faster than the team can manually oversee it?
  20. What were those other scaling issues, and did they ever make you question whether Supabase was the right place to be anchoring the rest of the stack?
  21. If another YC company asked you whether to use Supabase for a new project, what would your advice be?
  22. What do you think is the biggest risk for a company like Hazel building on Supabase for the long term? Is there anything that would actually make you consider switching, or is it purely a matter of price or reliability?
  23. Do you recall whether those minor engineering concerns were related to the auth layer?
  24. How do you handle data isolation between your customers? Is it all in one big database, or do you have separate databases for each customer?
  25. How has the experience been managing those row-level security policies as you've scaled? Has it stayed manageable, or has it started to feel like a complex tangle of permissions to maintain?

Interview

Tell me a bit about your professional background and what kind of work you do.

I currently work at an AI procurement startup that serves the United States public sector. Before that, I worked in management consulting in the aerospace and defense and healthcare industries. My educational background is in electrical engineering—I did my bachelor's in electrical engineering at Harvard. I have some software engineering experience and do product and light engineering work, but I'm not a core engineer in my job.

How does Supabase fit into that picture? Is it something you're using at Hazel right now or more for your own projects?

It's something we use at Hazel right now. It is the core data infrastructure that underlies our product. We use it for authentication, for storing all of our platform data, and for storing all of our customer data as well.

Roughly when did the team first start using Supabase?

We began using it in the fall of 2024—around October or November of 2024.

How did Supabase actually come into the picture for Hazel? Was there a formal evaluation of alternatives like Firebase or raw Postgres, or was it more of an organic choice by the founding team?

It was an organic choice. Supabase is one of the most highly rated deals on Y Combinator's internal Bookface page. We knew we needed a solution that did those things, so we picked Supabase.

Which specific part of Supabase was the initial hook for the team?

The Firebase alternative.

Which specific Supabase products are you actually using day to day at Hazel? Is it just the database and auth, or are things like storage, edge functions, and real time also in the mix?

Storage, PostgreSQL, and auth.

How central is the auth piece compared to the database for your workflow?

Both are equally central to what we do.

Have you had any experience with the real time features or pgvector for any of the AI work you're doing at Hazel?

We used some of it, but we didn't end up adopting it fully. We used an Amazon tool for embeddings or vectorization—I forget which. For AI embeddings, we explored using Supabase but ultimately relied entirely on commercial LLM API calls. That said, I believe we have since moved away from purely API-based embeddings from third-party providers, and we may now be storing vectors within our own Supabase database. How exactly that works, I'm not sure.
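
For context, storing embeddings directly in Supabase is done through Postgres's pgvector extension. The sketch below shows the general shape of that setup; it is illustrative only, not a description of this company's actual schema—the table name, columns, and the 1536 dimension (a common embedding size) are assumptions.

```sql
-- Enable pgvector (available on Supabase-managed Postgres).
create extension if not exists vector;

-- Hypothetical table storing text alongside its embedding.
create table embeddings (
  id bigint generated always as identity primary key,
  content text,
  embedding vector(1536)  -- dimension depends on the embedding model used
);

-- Nearest-neighbor lookup by cosine distance ($1 is the query embedding).
select id, content
from embeddings
order by embedding <=> $1
limit 5;
```

Keeping vectors next to the rest of the platform data, as the speaker describes, avoids a separate vector database and lets similarity search join directly against existing tables.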

Do you have visibility into whether you're on a paid plan or the free tier? What does billing look like?

We're on a paid plan. I believe we're billed monthly for our usage.

Do you recall what specifically triggered the move to paid? Was it hitting a certain limit on database size or users, or was it more about needing specific production features?

I think it was our credits expiring.

Now that you're paying for it, how does the pricing feel overall? Have there been any surprises in the bill as your data or user base has grown?

It's fine. It's not something that causes me any anxiety right now. Of course, we have a lot of back-end costs throughout what we're doing, but I haven't seen anything that is a cause for concern.

What would you say is the real anchor that keeps Hazel on Supabase? Is it the convenience of having everything in one place, or is it that migrating away would just be too painful at this stage?

It's both. The product simply works, and it works very, very well. It's very easy to use, very sustainable, and very scalable. We have no reason to switch, and if we did, I'm sure it would be a pain in the ass.

How painful would that migration actually be—are we talking a few days of engineering time, or a months-long project that would essentially halt feature development?

We're talking months.

Does the fact that Supabase is built on open source Postgres give the team any peace of mind regarding lock-in? Or does it feel like the other layers—like auth and storage—make it a proprietary platform?

It's funny—because of some of the work we do, we can't use closed-source solutions anyway, given some of our government contracts.

Does that mean you've had to look into self-hosting Supabase to meet specific security or data residency requirements? Or is the managed cloud version sufficient for now?

We use managed cloud for now. We will eventually have to self-host it for some specific customer needs.

Does the team feel confident that Supabase's open source architecture will make that self-hosting transition smooth? Or is there any apprehension about the complexity of managing all those moving parts yourselves?

We're comfortable doing it. It's not going to be easier with any other solution, frankly, so I don't see why Supabase would be a problem.

Are you using AI coding tools like Cursor, Bolt, or Lovable in your workflow? If so, does Supabase feel well supported by those tools?

Yes, we use Cursor. Yes, it is well supported.

Have you noticed if the speed of iteration has changed how you use Supabase? For example, are you spinning up more tables or edge functions because the AI makes it so easy to scaffold them?

That is absolutely true. We generate far more tables and more data infrastructure because the cost of doing so in terms of development time is so low.

Have you run into any messiness or scaling issues where the database schema is growing faster than the team can manually oversee it?

No. We haven't had any database schema issues. We've had scaling issues with other aspects of our solution, but not with the database.

What were those other scaling issues, and did they ever make you question whether Supabase was the right place to be anchoring the rest of the stack?

The other scaling issues were infrastructure-related—specific to how we build our microservices architecture and the way we handle different features like document processing and third-party LLM calls. At no point was Supabase an issue.

If another YC company asked you whether to use Supabase for a new project, what would your advice be?

Absolutely, no questions asked—use it.

What do you think is the biggest risk for a company like Hazel building on Supabase for the long term? Is there anything that would actually make you consider switching, or is it purely a matter of price or reliability?

Price and reliability, and nothing has made me believe we should change. I have heard some engineers mention in the past that there may be some specific configurations that can be a pain, but they were minor issues that I'm sure are part of any hosting or database solution. So the answer is no—it's great.

Do you recall whether those minor engineering concerns were related to the auth layer?

I think it was the auth solution—specifically how Supabase auth works. I don't remember exactly what it was, but I know we have looked into building our own auth solution or using a dedicated provider.

How do you handle data isolation between your customers? Is it all in one big database, or do you have separate databases for each customer?

We do it using row-level security.

How has the experience been managing those row-level security policies as you've scaled? Has it stayed manageable, or has it started to feel like a complex tangle of permissions to maintain?

It's been decently scalable. We've had to invest in it intermittently to make sure the infrastructure stack is up to par for our customer needs, but it's worked.
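
For reference, per-customer isolation with row-level security in Postgres (as Supabase exposes it) generally looks like the sketch below. The table and column names (`documents`, `customer_id`) are illustrative assumptions, not taken from the interview; Supabase's `auth.jwt()` helper is used to read the caller's token claims.

```sql
-- Hypothetical multi-tenant table; all names are illustrative.
create table documents (
  id uuid primary key default gen_random_uuid(),
  customer_id uuid not null,
  body text
);

-- Enable RLS: with no policies defined, all rows are denied by default.
alter table documents enable row level security;

-- Let users read only rows belonging to their own customer,
-- assuming a "customer_id" claim is embedded in the Supabase JWT.
create policy "customers read own rows" on documents
  for select
  using (customer_id = (auth.jwt() ->> 'customer_id')::uuid);
```

Because policies live in the database itself, every access path—API, dashboard, direct SQL—is subject to the same isolation rules, which is what makes a single shared database workable for multi-tenant setups like the one described here.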

Disclaimers

This transcript is for information purposes only and does not constitute advice of any type or trade recommendation and should not form the basis of any investment decision. Sacra accepts no liability for the transcript or for any errors, omissions or inaccuracies in respect of it. The views of the experts expressed in the transcript are those of the experts and they are not endorsed by, nor do they represent the opinion of Sacra. Sacra reserves all copyright, intellectual property rights in the transcript. Any modification, copying, displaying, distributing, transmitting, publishing, licensing, creating derivative works from, or selling any transcript is strictly prohibited.
