Edo Liberty, founder and CEO of Pinecone, on the companies indexed on OpenAI

Jan-Erik Asplund

Background

Edo Liberty is the founder and CEO of Pinecone. We talked to him to understand more about the applications of machine learning across ecommerce, social, and search, what new foundational models from OpenAI are enabling on the application side, and the emerging "stack" for working with LLMs.

Questions

  1. Let’s start just with you telling us a little bit about how Pinecone got started and what made you decide to start the company.
  2. What did that advancement on the foundation model level mean for Pinecone and for Pinecone customers, and how has it influenced the creation of an ML ‘stack’?
  3. With OpenAI turning on its ChatGPT API, how do you think about the potential competitive dynamics there?
  4. I can choose to either build my own recommendation algorithm using Pinecone, or I can look for one of these off-the-shelf SaaS tools that will give me a recommendation algorithm as a service. How do you think about that build vs. buy question?
  5. Who do you see as the primary user of Pinecone, or person that it's operated by on the SaaS side? And you said machine learning or data scientists, but if you can get any more specific around that, it'd be great to understand—do you think about it like Pinecone empowers people to build stuff that's way more sophisticated than they could build otherwise?
  6. I'm curious if there are specific—out of the ones on your website—if there are specific kinds of use cases that you've seen growing really quickly or getting really, really good traction, or if it's relatively evenly distributed across anomaly detection, search, and so on?
  7. Do you envision a future where it's somehow split? Some companies, I'm not sure if it makes sense to say bigger companies, use Pinecone, whereas maybe there is a market for 80% applications on top that solve specific problems. How do you think about the future there?
  8. What is your five-year vision, if everything goes the way you want it to go, what does Pinecone look like?

Interview

Let’s start just with you telling us a little bit about how Pinecone got started and what made you decide to start the company.

At the end of the day, it's very simple. What Pinecone does—vector databases—has been a foundational capability in big companies for a very long time. 

From image search, to semantic search in text, to anomaly detection, to security and fraud detection—I was personally involved in building these things at Yahoo and at AWS. I have friends and colleagues who work on the same things at Facebook and Google and so on. I've worked in this field myself; I've been a scientist and engineer working on these kinds of problems for a very long time.

So it's very natural to me, and it's something I've wanted to do for a long time, but it never quite reached a tipping point in terms of mindshare, where the average developer knew what it was. It was obvious to me that the value for tens of thousands of amazing applications is there—and by the way, these are not small applications—feed ranking at Facebook, text search at Google, or shopping recommendations at Amazon are all based on vector search. They're all based on something like Pinecone. These are not small applications, these are the cash cows at the biggest companies, driven by AI with this kind of information.

Nevertheless, the mindshare in the market wasn't there yet. And then, with foundational models, with language models, with vision models creating these vector embeddings, creating these numeric representations of text and images and so on, suddenly that really sprang into the shared mindset of pretty much every developer. Now, of course, with everything that's happening with OpenAI and all that stuff, however much we thought we were already on steroids, now we're on double dosage.

What did that advancement on the foundation model level mean for Pinecone and for Pinecone customers, and how has it influenced the creation of an ML ‘stack’?

A stack—that's exactly how people use it. 

People use OpenAI—and other great machine learning and language models like Hugging Face, like Cohere—but people really see OpenAI and Pinecone as peanut butter and jelly. They're like “Oh, this is the natural combo.” 

They'll take language, text or documents or paragraphs or search queries and so on, pass them through OpenAI's models, get embeddings, store those embeddings in Pinecone, and query those by similarity, by relevance, and so on.
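A minimal sketch of that flow, assuming the Python clients for OpenAI and Pinecone from around that time (the API keys, environment, index name, embedding model, and sample documents below are placeholders; the index is assumed to already exist with the model's dimension, and exact method names vary by client version):

    import openai
    import pinecone

    # Placeholders: keys, environment, and index name are illustrative only.
    openai.api_key = "YOUR_OPENAI_KEY"
    pinecone.init(api_key="YOUR_PINECONE_KEY", environment="us-east1-gcp")
    index = pinecone.Index("semantic-search")  # assumed to exist, dimension 1536

    docs = ["How do I reset my password?", "Standard shipping takes 3-5 business days."]

    # 1. Pass the text through an OpenAI embedding model to get vectors.
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=docs)
    vectors = [record["embedding"] for record in resp["data"]]

    # 2. Store the embeddings in Pinecone, keeping the original text as metadata.
    index.upsert(vectors=[
        (f"doc-{i}", vec, {"text": docs[i]}) for i, vec in enumerate(vectors)
    ])

    # 3. Query by similarity: embed the query the same way, then fetch nearest neighbors.
    query = openai.Embedding.create(model="text-embedding-ada-002", input=["password reset"])
    results = index.query(vector=query["data"][0]["embedding"], top_k=3, include_metadata=True)
    for match in results.matches:
        print(match.score, match.metadata["text"])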

People build what's called retrieval-augmented generation (RAG). So instead of using a Q&A system and searching for a single answer, they use Pinecone to retrieve the most semantically relevant results and then use generative AI models to synthesize them into one answer. Suddenly, people can build a ChatGPT kind of thing on their own data—and it's liberating.
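A rough sketch of that RAG pattern, reusing an index populated as in the snippet above (the model names, prompt, and helper function here are illustrative assumptions, not a prescribed setup):

    import openai

    def answer(question, index, top_k=5):
        # Embed the question and retrieve the most relevant passages from Pinecone.
        emb = openai.Embedding.create(model="text-embedding-ada-002", input=[question])
        hits = index.query(vector=emb["data"][0]["embedding"], top_k=top_k, include_metadata=True)
        context = "\n".join(hit.metadata["text"] for hit in hits.matches)

        # Have a generative model synthesize one answer from the retrieved context.
        chat = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": "Answer the question using only the provided context."},
                {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
            ],
        )
        return chat["choices"][0]["message"]["content"]

The retrieval step is what grounds the generative model in a team's own data rather than only what it saw during training.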

When I was in grade school, you had to summarize some topic, and back then, Google wasn't a thing, so you had to go and pick up five books and then you'd summarize them. People are now doing the same thing with AI. They search and then they find 10 relevant answers and they synthesize them into actual text.

So this ability to freely move between text and images and their numeric representations, store them in Pinecone, search them, retrieve and annotate them, give the model access to hundreds of millions or billions of actual records or actual embeddings of these documents, and build applications with those two components ends up being incredibly powerful.
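On the image side, one way to put text and images into the same vector space is a CLIP-style model from Hugging Face via sentence-transformers; the model name, index name, and file paths below are illustrative assumptions, not tools named in the interview:

    from sentence_transformers import SentenceTransformer
    from PIL import Image
    import pinecone

    # A CLIP-style model maps images and text into one shared vector space,
    # so a plain-text query can retrieve visually similar images.
    model = SentenceTransformer("clip-ViT-B-32")
    pinecone.init(api_key="YOUR_PINECONE_KEY", environment="us-east1-gcp")
    index = pinecone.Index("image-search")  # assumed to exist, dimension 512

    # Embed and store a few images, keeping their paths as metadata.
    paths = ["cat.jpg", "beach.jpg", "receipt.png"]
    image_vectors = model.encode([Image.open(p) for p in paths])
    index.upsert(vectors=[
        (p, vec.tolist(), {"path": p}) for p, vec in zip(paths, image_vectors)
    ])

    # Search the stored images with a text query embedded by the same model.
    query_vector = model.encode("a cat sleeping on a couch").tolist()
    for match in index.query(vector=query_vector, top_k=3, include_metadata=True).matches:
        print(match.score, match.metadata["path"])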

And again, people do all sorts of crazy stuff with it, really building these amazing, amazing applications. Just the creativity that comes out of it, it's unreal. It's kind of crazy, because, well, I don't want to share exact numbers, but we've basically tripled the number of new customers per day that onboard Pinecone. And I don't think I've met any two customers that do exactly the same thing. It's kind of nutty: a lot of semantic search, a lot of question answering and so on, but it's always with a twist. They always have their own flavor, they have their own data, they have their own model, they have some business logic on top of it.

With OpenAI turning on its ChatGPT API, how do you think about the potential competitive dynamics there?

I don’t see them as competitors at all. I’m a huge believer in building great components and liberating developers to build amazing applications. What OpenAI is doing is phenomenal and more power to them. They're making the models available as APIs and very low code integration points—with God knows how many developers—and that's great. 

What Pinecone does is something very, very, very different. It takes the output of those models, the embeddings, the vectors, the numeric representations of those objects, so that you can run search, so that you can run anomaly detection, so that you can run de-duplication, so that you can find context and find answers to questions and so on. It's a search capability, it's a database capability. We don't train models at all. We care very deeply about reducing cost and improving efficiency, reducing latencies, improving high availability, making sure we have no data loss. It's a database. It's a managed database. It's a completely different object.

I can choose to either build my own recommendation algorithm using Pinecone, or I can look for one of these off-the-shelf SaaS tools that will give me a recommendation algorithm as a service. How do you think about that build vs. buy question?

It's not really a “versus”. 

First of all, a lot of these solutions that you think of as verticalized or as out-of-the-box solutions for a specific problem, anomaly detection, or similarity search, or semantic search, use Pinecone under the hood. You might not know that, but in the same way that they use, whatever, EC2, they use a lot of infrastructure. Oftentimes, again, I see a lot of articles online, like, ‘oh, the 10 hottest companies in semantic search.’ And you read that and—oh, half of them are our customers. I have absolutely no problem that we're not mentioned there, because we don't play in that space. We're infrastructure. I'm perfectly happy partnering with them, and them being our customers and building amazing solutions for their own customers.

And again, power to them. If they're doing the right thing for their customers and they're building great products? Great, amazing, they should be paid for it. They should build a great and successful company; we're not competing with them at all. I will tell you that a lot of customers and a lot of people that come to us, come to us because they graduate from the vertical solution. They had some whatever anomaly detection vendor, and at some point the company grew, they had more resources and so on, and they figured out, ‘hey, maybe if we hire a data scientist and a machine learning engineer, maybe we can build something better, because we know stuff about our use case that others don't.’

Then, they build directly on top of Pinecone and don't go to a managed, verticalized, out-of-the-box solution. Because with the out-of-the-box solution, the amazing thing is that it gives you, say, 80% utility out of the box, which is amazing, but it stays at exactly 80%. There's nothing you can do to make it 81%. When you build your own, you start at 20% or at zero percent, but there's no limit. You can get to 100%. It'll be more work, but you're not fenced in.

Who do you see as the primary user of Pinecone, or person that it's operated by on the SaaS side? And you said machine learning or data scientists, but if you can get any more specific around that, it'd be great to understand—do you think about it like Pinecone empowers people to build stuff that's way more sophisticated than they could build otherwise?

The answer is that two different kinds of motions are happening simultaneously.

One is that there really is more talent out there. There is more demand than there is talent, so if you're hiring, it's still hard to hire. It's still very hard to find great machine learning engineers and machine learning platform engineers and so on. They're hard to find, but there are still a lot more of them. Machine learning engineer was the number three or four, I forget, I think maybe it's number four, fastest-growing profession in 2022 based on a LinkedIn survey. And by the way, this isn't just in tech, this is in general. Number one is vaccine specialist.

This is, in general, one of the fastest growing professions in the world. So yes, there's even more demand, so hiring is not easy, but the market is maturing. You'll see a lot more people who know how to operate this machinery in the market, and you'll see more of that every year going forward. 

The second thing is that the ecosystem is maturing. The fact that something like Pinecone exists means that you don't have to build it. The fact that OpenAI opens the models as APIs means you can call an API. You don't have to train things, you don't have to collect data, you don't have to have people that know how to train machine learning models.

The bar is getting lower and, at the same time, the talent pool is getting bigger. So a lot of companies, even if they don't feel like it's their forte or exactly the thing they need to throw, I don't know, 50 headcount at, are oftentimes very delighted to figure out that with a relatively small amount of effort, they can get pretty far today, because the tool chain has improved a lot. They can put together a few managed services, maybe with one or two people working for a quarter, and they can already build some value for their own customers. Maybe they validate for themselves that this is really kick-ass and they want to triple down on it, or maybe they don't and they build something else. But it becomes something a lot more accessible and available to a very large number of companies.

I'm curious if there are specific—out of the ones on your website—if there are specific kinds of use cases that you've seen growing really quickly or getting really, really good traction, or if it's relatively evenly distributed across anomaly detection, search, and so on?

Text search and semantic search specifically are very common, along with question answering, recommendation, de-duplication, image search, and, again, image de-duplication and recommendation. It's hard to say, also, because I don't know who the audience is, but even text search or semantic text search is not one thing. Tweets and emails and receipts and Jira tickets are all text, but they're used in different ways, they mean different things. You search for them differently, you care about different things when you consume them. If you are building Twitter or building Atlassian or building Yahoo Mail, you have a lot of text, but you care about very different things. And all of them will be building on Pinecone in different ways.

Yes, those are big focus areas, but there's huge variability within each one, because, again, every application ends up being something very different. And by the way, that's why there's a play for the horizontal platform. If I had to build a kind of out-of-the-box semantic text search solution, I would have to choose: does it work for receipts or tweets or Jira tickets? It can't do all of them, because you have to choose what relevance even means.

Do you envision a future where it's somehow split? Some companies, I'm not sure if it makes sense to say bigger companies, use Pinecone, whereas maybe there is a market for 80% applications on top that solve specific problems. How do you think about the future there?

First of all, you hinted at some split between large and small companies—we don't see that split at all. You have some young, relatively small or medium-sized companies being very aggressive and putting a ton of energy into this space, and vice versa: sometimes you see ginormous companies that have unlimited resources being very defensive and doing the absolute minimal step forward just to keep up the appearance that they care about it.

The size of the company doesn't matter in our experience. What does matter is how seriously they take it and how important it is to them. More than that, again, because Pinecone exists, because the ecosystem is maturing, it's also becoming a lot more accessible. As a company, you don't have to put a third of your headcount on something to go and figure it out. You just have to procure some capacity to run something, and it's not even that expensive. You might spend $100K on compute and usage costs across a bunch of services to figure out whether you can build an amazing application. Yeah, that's money, but you're not going to change the whole company's strategy to go figure it out. It's something that you do in a quarter to figure out whether it's worth doubling down on next quarter. That's becoming a lot more common, and people feel a lot more comfortable with it.

So, the split between small and big companies, again, I don't think it's a thing. I think seriousness and ability to really just take a real swing at it is a much better indication of how successful somebody's going to be. That's one.

I see almost the opposite, to be honest. The small companies that we work with tend to be very innovative, because small companies tend to be relatively new companies. They're kind of AI-happy, data-happy, cloud-first digital natives. This all comes very naturally to them. So for them to take an enterprise-y canned solution to do something feels very unnatural, but for them to cobble together three cloud services and build some amazing thing feels very normal.

If anything, it's larger companies that tend to be sometimes more risk-averse and say, “Hey, why don't we just pay some vendor to do it?” And oftentimes, it's more because the managers and the VPs are the old guard and they have the same mentality. Oftentimes, when we talk to the developers at the engineering level, they say, “We would want to build this because it'll be much better and easier and more fun, and frankly, a lot cheaper, but hey, that's what the bigwigs decided.” Unfortunately, I can't help them with those kinds of internal politics.

What is your five-year vision, if everything goes the way you want it to go, what does Pinecone look like?

To be honest, if you had asked me two years ago, I would have described how we are today, even though I thought it would take us five years to get here. It only took two, maybe even one, depending on where you start counting. So it's very hard for me to predict. I can answer it in two different ways. First of all, in the market, or at least in the AI space, we see that there are a handful of foundational capabilities: something like a vector database, something like a model hosting solution, something that does model training, orchestration, monitoring, and so on. There are maybe six or seven different core components.

I really want to make sure that we build the absolute best vector database in the world, no questions asked. The fastest, cheapest, best developer experience, start today, go to production tomorrow, enterprise ready, the obvious choice, where you'd have to find a really good justification to use anything other than Pinecone. We are already pretty close to that, but we're going to keep working at it, and we're going to make it even better, even faster, even cheaper, even more enterprise ready and so on. 

As a company, we want to stay at the center of innovation and at the cutting edge, in both engineering and science. I tell people that engineering and science at Pinecone are like cooking in a Michelin-starred restaurant. You're not cooking for yourself at home, you're not even cooking in a regular restaurant. The bar is extremely high because our own consumers are kind of the food critics of the software industry. They themselves are systems developers, systems engineers, machine learning engineers. They know their work. They're very smart, they're very demanding, and we develop for them. They come to our restaurant; they are the ones consuming our product. So I want to keep pushing that bar even higher: even faster, better, more accurate, more stable, more performant, everything.

Disclaimers

This transcript is for information purposes only and does not constitute advice of any type or trade recommendation and should not form the basis of any investment decision. Sacra accepts no liability for the transcript or for any errors, omissions or inaccuracies in respect of it. The views of the experts expressed in the transcript are those of the experts and they are not endorsed by, nor do they represent the opinion of Sacra. Sacra reserves all copyright, intellectual property rights in the transcript. Any modification, copying, displaying, distributing, transmitting, publishing, licensing, creating derivative works from, or selling any transcript is strictly prohibited.
