Lambda customer at Iambic Therapeutics on GPU infrastructure choices for ML training and inference

Background
We spoke with a machine learning leader at Iambic Therapeutics who manages significant GPU infrastructure across Lambda Labs for training and AWS for inference workloads.
The conversation explores their decision-making around GPU provider selection, highlighting how Lambda's pricing advantages, flexibility, and startup-friendly culture have made them a preferred partner for high-performance training clusters despite the evolving cloud GPU landscape.
Key points via Sacra AI:
- Lambda Labs wins on price and flexibility for training workloads, while AWS provides reliability for inference despite higher costs. "For AWS, it was simply not possible at that time to get the quality of interconnect that we wanted... Meanwhile, with CoreWeave and Lambda, both of them were open to that option and were able to spec out for us the price of doing that, even though that wasn't their plan initially. That flexibility was necessary in order for us to sign any contract at all because we needed this InfiniBand. But also, surprisingly, it was cheaper to go with Lambda and CoreWeave at the time."
- The GPU cloud market splits between NeoClouds (Lambda, CoreWeave) offering lower prices and more configurability versus traditional hyperscalers (AWS, GCP, Oracle) with enterprise-grade but expensive infrastructure. "I think the first thing to do is to break GPU providers into two groups. One is going to be the NeoClouds providers who often will be more raw. They will be younger businesses, have less infrastructure, but possibly way more GPUs available. The prices will be cheaper... The other class is the traditional providers: AWS, GCP, Azure, Oracle... These providers will give you much higher prices and want to sign much bigger and longer-term contracts, but they offer a more enterprise, more robust, more fully-fledged experience."
- There's an untapped opportunity for a "DigitalOcean for machine learning" that combines affordable GPUs with researcher-friendly tooling. "I'm still of the opinion that no one has done a good job of solving the DigitalOcean for machine learning researcher problem. There really is a big advantage to a cloud that can offer a much more pleasant developer experience on top of access to cheap GPUs... I think Lambda providing something more like DigitalOcean but for GPUs is a thing that they are in an opportune place to do."
Questions
- Which cloud GPU providers do you currently work with most frequently?
- At a high level, what kinds of workloads are you running on those platforms? And what would you estimate is your monthly GPU spend across them?
- How does your team think about the decision process around where to run training versus inference specifically? What are the technical or economic factors that tend to push training to a provider like Lambda and inference to AWS in your case?
- What would you say are the key ways Lambda differentiates itself from other GPU providers? What stands out about them in particular?
- How sticky do you find GPU platforms to be? In your view, what makes a provider like Lambda successful at keeping your business over time? Especially since GPU hours are arguably fungible.
- Have you seen Lambda make specific changes or improvements based on your team's feedback or needs?
- To what extent do you think that kind of flexibility is sustainable as Lambda scales, or is that a benefit that only early or midstage customers might enjoy?
- Aside from the price and flexibility, what role does ease of use or DevEx play when choosing a GPU platform like Lambda or AWS? How much does that weigh in your decision making?
- In your view, how commoditized is cloud GPU infrastructure today? Are most of your decisions driven primarily by price and specs? Or are there meaningful differences between providers beyond the hardware?
- Do you think a company like Lambda is in a good position to evolve in that direction? Layer on more developer friendly tools? Or do you think that's more likely to come from a different type of company altogether?
- Have you ever encountered availability issues with Lambda in practice? For example, difficulty accessing certain types of GPUs at certain times, or limits on how many nodes you could spin up. How did that impact you if at all?
- Earlier, you mentioned Together AI and others trying to move higher up the stack. Some of those companies lease GPU capacity rather than own it, and they tend to charge higher prices but offer more in terms of dev experience, APIs, and inference tooling. In your view, is the distinction between GPU infrastructure providers like Lambda that own their hardware and others like Together that lease and build software a meaningful one? And how do you think that plays into how each competes or evolves over time?
- Looking ahead a few years, how do you see the cloud GPU space evolving? Are there trends or changes on the horizon you're watching that might reshape how you or others in the ecosystem approach training and inference?
- Anything else you'd add that we haven't touched on yet you think is important for us or for investors to understand about the cloud GPU infrastructure space or where it's headed?
- You mentioned earlier that your team looked at both Lambda and CoreWeave pretty closely before signing. Could you walk me through a bit more of that evaluation process? What were some of the key technical or business differences you noticed between Lambda and CoreWeave as you compared them side by side?
- How much weight did you place on customer support or responsiveness from the team? Did Lambda stand out in any way on that dimension?
- Are there any specific workloads you see emerging in inference that you think will drive GPU demand going forward? Maybe things that are underappreciated right now, but could be big?
Interview
Which cloud GPU providers do you currently work with most frequently?
Lambda Labs and AWS.
At a high level, what kinds of workloads are you running on those platforms? And what would you estimate is your monthly GPU spend across them?
On Lambda Labs, we primarily do our training workloads. This is where we'll have reserved GPUs that are networked together with high-quality InfiniBand connections and do single, large, parallel training calculations in a fully synchronous and scheduled way ahead of time.
The type of GPUs we use for this would be things like the A100, the H100, and the B200. These typically need to be symmetric hardware, so all the GPUs being used in a training run need to be identical to each other. The machines need to be very highly performant, especially the networking between them for parallel training. This typically requires the GPUs to be built out to the NVIDIA reference architectures, such as the HGX architecture for the H100 GPUs. On Lambda Labs, our typical cloud spend could range between $500,000 and $1 million per month.
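To make the "fully synchronous, scheduled" training pattern concrete, here is a minimal sketch of the kind of data-parallel loop it implies, using PyTorch's DistributedDataParallel over NCCL (which is what rides on the InfiniBand fabric). The model, dataset, and hyperparameters are placeholders, not Iambic's actual setup.

```python
# Minimal sketch of synchronous multi-GPU training (illustrative, not Iambic's code).
# Typically launched with `torchrun --nnodes=<N> --nproc_per_node=8 train.py`; NCCL
# performs the per-step gradient all-reduce over the cluster's InfiniBand fabric.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")          # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(4096, 4096).cuda()       # placeholder model
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(1000):                         # placeholder data
        x = torch.randn(32, 4096, device="cuda")
        loss = model(x).square().mean()
        loss.backward()                              # gradients all-reduced across every GPU
        opt.step()
        opt.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Because every rank executes the same step in lockstep and all-reduces gradients each iteration, a mismatched GPU or a slow interconnect stalls the whole job, which is why symmetric hardware and high-quality InfiniBand are treated as non-negotiable for these clusters.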
Our second use is AWS, where we do most of our inference. Our inference workloads can be spiky in the sense that they can be on-demand where users expect low latency time to first outputs. Or they could be very large high-throughput batch jobs where users expect that they can send a large number of jobs in and the cloud will take care of them efficiently.
In this case, the type of GPU we use is closely matched to the model in order to save cost. We're effectively going to use the smallest possible GPU that can fit the model in VRAM and on which the model can run efficiently. We find that the most effective GPUs for this usage are typically things built on at least the Ampere architecture. The A10G GPU, as deployed in G5 instances, is one really shining example: it's the workhorse of most of what we do.
The other GPU that we tend to use from AWS will be the L40S GPU as deployed in the G6e family of instances; this one is built on the Ada Lovelace architecture. The primary usage for this particular GPU is cases where we need larger memory. Memory on the L40S is twice that of the A10G, 48 gigabytes versus 24.
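As a rough illustration of the "smallest GPU that fits the model in VRAM" rule, a back-of-the-envelope sizing check might look like the sketch below. The model sizes and the overhead multiplier are illustrative assumptions, not the firm's actual models.

```python
# Back-of-the-envelope VRAM sizing (illustrative numbers, not Iambic's models).
GPU_VRAM_GB = {"A10G (g5)": 24, "L40S (g6e)": 48}

def weights_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    """Memory for model weights alone, e.g. 2 bytes/param for fp16/bf16."""
    return params_billion * 1e9 * bytes_per_param / 1e9

def fits(params_billion: float, overhead: float = 1.3) -> dict:
    """Apply a rough multiplier for activations / KV cache / CUDA context."""
    need_gb = weights_gb(params_billion) * overhead
    return {gpu: need_gb <= vram for gpu, vram in GPU_VRAM_GB.items()}

print(fits(7))    # ~18 GB needed  -> fits the 24 GB A10G
print(fits(13))   # ~34 GB needed  -> needs the 48 GB L40S
```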
In terms of monthly cloud spend, this is going to vary quite a lot, but it probably averages between $50,000 and $100,000 a month, but it could spike up as high as $200,000-300,000 in some months if we have a particular inference workload of large importance.
How does your team think about the decision process around where to run training versus inference specifically? What are the technical or economic factors that tend to push training to a provider like Lambda and inference to AWS in your case?
The critical thing necessary for training is going to be the availability of high-quality machines networked together, built to a reference architecture, and reserved ahead of time. On-demand support for very high-quality clusters is not broadly available. There are certain companies going after this—Lambda Labs is one of them with their one-click cluster idea, as well as companies like SF Compute Co. But it's proven very difficult for people to really deliver on the promise of this.
AWS themselves have an offering where you can schedule high-quality networked GPUs in advance, but their prices are very high, and the number that you can schedule and the amount of time you have to wait is quite large. So in order to really reduce barriers toward experimentation on model development, as well as just getting the large production training runs done, we're focused on signing contracts with providers that can provide us reserved time where we'll have 24/7 access to a cluster of a certain fixed size for a certain duration of time. We typically want to sign these contracts for 18 months or longer.
Once that's all put in place and we're convinced that the architecture suits our needs and the SLAs and security are sufficient, then what we're looking for is the lowest possible price. Providers like Lambda, or maybe CoreWeave or some of the NeoClouds beyond them, tend to be substantially cheaper on a per-GPU-hour basis than the traditional hyperscaler clouds like AWS, GCP, or Oracle. It can be as much as a factor of 2—like $2 versus $4 per H100-hour, for reference.
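To put that factor of two in context, a rough cost comparison for an illustrative reserved cluster might look like the sketch below. The cluster size and contract term are assumptions for illustration, not the actual contract.

```python
# Illustrative reserved-cluster cost at the quoted $2 vs $4 per H100-hour.
gpus, months = 64, 18                      # assumed cluster size and term
hours_per_gpu = months * 30 * 24           # ~12,960 reserved hours per GPU

for rate in (2.0, 4.0):                    # $/GPU-hour: NeoCloud vs hyperscaler
    total = gpus * hours_per_gpu * rate
    print(f"${rate:.0f}/hr -> ${total / 1e6:.1f}M over {months} months")
# $2/hr -> ~$1.7M, $4/hr -> ~$3.3M: the price gap roughly doubles total training spend.
```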
On the inference side, we're a little bit less economically sensitive because the amount of inference that we do is actually small compared to training. That's different from the typical model of OpenAI or Anthropic or someone that has an enormous number of users. We have relatively few users and relatively high-value models.
For us, what matters a lot is having fully reliable, secure, programmable, mature cloud infrastructure for inference, and that's where AWS comes in. AWS offers a very complete, mature, and reliable set of offerings—from data storage with S3, EC2 for compute, various other bits and bobs that you can wire together, as well as third-party services like Coiled that allow you to put all these pieces together and build high-uptime, high-reliability services that serve models.
We're a little bit less sensitive to the economics there. We're willing to pay the AWS tax in order to basically be able to get an A10G GPU on demand anytime and expect it to just spin up without any issues.
What would you say are the key ways Lambda differentiates itself from other GPU providers? What stands out about them in particular?
I think the first thing to do is to break GPU providers into two groups. One is going to be the NeoClouds providers who often will be more raw. They will be younger businesses, have less infrastructure, but possibly way more GPUs available. The prices will be cheaper. Lambda is a member of this class.
The other class is the traditional providers: AWS, GCP, Azure, Oracle. NVIDIA has kind of danced in and out of the idea that they might offer this type of offering as well. These providers will give you much higher prices and want to sign much bigger and longer-term contracts, but they offer a more enterprise, more robust, more fully-fledged experience.
For us, the draw of Lambda really first is the low prices. And second, along with the other NeoClouds, because they're young, they offer a little bit more configurability.
Early on, when we were trying to scope out which company we were going to purchase a cluster from in late 2023, we had four offers. Two from traditional hyperscalers, AWS and Oracle in particular, and two from NeoClouds, Lambda and CoreWeave.
What we had learned at the time, but maybe hadn't fully percolated through the market, was that the quality of the GPU interconnect between the GPUs—your InfiniBand type of thing—was of critical importance to the success of training our models.
For AWS, it was simply not possible at that time to get the quality of interconnect that we wanted. AWS had developed their own thing called Elastic Fabric Adapter, and it was just not up to spec. Some new version of it probably was going to be, but it wasn't at that time. Similarly, Oracle had caught on to this idea, but they hadn't fully deployed the high enough quality InfiniBand. So even if we were willing to pay the much higher prices of those contracts, it just was not possible to get this level of interconnect rolled out in the time of that contract.
Meanwhile, with CoreWeave and Lambda, both of them were open to that option and were able to spec out for us the price of doing that, even though that wasn't their plan initially. That flexibility was necessary in order for us to sign any contract at all because we needed this InfiniBand. But also, surprisingly, it was cheaper to go with Lambda and CoreWeave at the time.
How sticky do you find GPU platforms to be? In your view, what makes a provider like Lambda successful at keeping your business over time? Especially since GPU hours are arguably fungible.
It's an interesting question. In some sense, there's the old adage that "I like my house more than the others because it's where my stuff is." So there are always switching costs even if everything looks extremely fungible.
There have been attempts to really commoditize GPU hours. There are things like the San Francisco Compute Company marketplace. And while those look like they're up-and-coming, they're not quite there yet.
One part of what makes Lambda sticky is that once you commit to anything, you have to start to build your system around their system. And to some extent, they being a smaller company are able to react and build their system around your needs as well. So that creates a little bit of lock-in.
I would say the other thing is that Lambda does a good job of staying basically at market price for a NeoCloud. So it's not really worth paying switching costs because no one is able to offer us sufficiently cheaper GPUs that it's worth it for us to switch. If someone can offer us a few pennies per hour off of GPU cost, that's not worth it because it just slows us down. There's no need to move house over such a small difference.
Have you seen Lambda make specific changes or improvements based on your team's feedback or needs?
Yes. With Lambda, we have a Kubernetes setup which requires the installation of custom pieces of Kubernetes to keep it going, and they've been very helpful with that.
The second thing is we have with them a custom setup for the way that our storage is managed. I think this was done in a somewhat larger way by them, but we really love how it's been done. And for our first cluster, we were able to get it fully air-gapped when we were more worried about security.
They've been willing to flex and meet us on various things, perhaps because they're a smaller company, especially at the time we first started working with them. It's been very nice and collaborative.
I'd say the other thing is there's definitely a kind of small company, startup type of vibe to their company culture. Things like they're available to us on Slack, and things are generally informal and conversational and don't feel like an adversarial contract negotiation whenever you want to change something or the terms of the deal or ask for a favor. All of those things really add up and make it so much more frictionless and easier to work together.
To what extent do you think that kind of flexibility is sustainable as Lambda scales, or is that a benefit that only early or midstage customers might enjoy?
I'm unsure. I actually don't really know the rate at which Lambda is scaling. I have seen that later compute we purchased from them has become more standardized—more one-size-fits-all.
We're not too bothered by that, frankly, because they have improved the quality of their offering. The new thing they offer, which I think at the time they called one-click clusters (they might still call it that or might have changed the name), is a private cloud platform offered via Kubernetes through their one-click cluster system, with cloud-attached storage. This has all actually been very nice.
Aside from the price and flexibility, what role does ease of use or DevEx play when choosing a GPU platform like Lambda or AWS? How much does that weigh in your decision making?
This is going to vary a lot by the maturity of the team and what they are trying to accomplish. There are two profiles that are really important. One is the profile of the machine learning researcher, and the other is the profile of the software machine learning deployer.
For a researcher, what they really want is something that looks like a queuing system and a pile of GPUs that are networked together, with the ability to basically just SSH into that thing. Traditionally, this might be like a Slurm cluster with a head node, some network attached storage, and a bunch of compute nodes each with GPUs.
Another way this can be orchestrated is as a Kubernetes cluster, where the compute nodes act as the node pool of the Kubernetes cluster. Storage comes in the form of persistent volume claims (PVCs) requested either from the cloud or against network-attached storage.
But the abstraction there is that you have a queue, you have some definition of the job, you have the ability to put your data and your code somewhere on the cluster or in a Docker container. You submit those things together, some results get written somewhere often on the cluster, and then you react to that. It's very manual and not necessarily super automatable or reproducible, but you get a lot of control over the hardware, and it's really easy to get into the system and start working.
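For the Kubernetes flavor of that workflow, the "submit a job against the cluster" step might look roughly like the sketch below, using the official kubernetes Python client. The image name, namespace, PVC name, and GPU count are placeholders, not the actual setup.

```python
# Rough sketch of submitting a containerized training job to a Kubernetes cluster
# (names are placeholders; requires the `kubernetes` Python client and a kubeconfig).
from kubernetes import client, config

config.load_kube_config()

job = client.V1Job(
    metadata=client.V1ObjectMeta(name="train-run-001"),
    spec=client.V1JobSpec(
        backoff_limit=0,
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[client.V1Container(
                    name="trainer",
                    image="registry.example.com/trainer:latest",  # code and deps baked in
                    command=["python", "train.py", "--config", "/data/run001.yaml"],
                    resources=client.V1ResourceRequirements(
                        limits={"nvidia.com/gpu": "8"}),           # request a full node's GPUs
                    volume_mounts=[client.V1VolumeMount(name="data", mount_path="/data")],
                )],
                volumes=[client.V1Volume(
                    name="data",
                    persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                        claim_name="training-data"))],             # PVC over network-attached storage
            ),
        ),
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="research", body=job)
```

Results typically get written back to the same PVC, which is the manual but controllable loop described above.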
On the other hand, for people trying to deploy models and build robust systems that can respond to user requests and never go down, you want something that looks more like AWS. Where you have, for example, Elastic Kubernetes Service, where you can serve node pools off of EC2, where you can always get an instance, where you never lose data on S3. Where you can manage your infrastructure using things like Terraform so that you have infrastructure written down as code, and things are trackable, versionable, and copy-pasteable. Basically, no one is clicking buttons to make things happen—everything is written down as code that can be proper software engineered.
We initially thought about the possibility of having a Kubernetes cluster on our Lambda Labs research cluster that would serve both for training and deployment purposes, with the idea that you could do both in Kubernetes. We did not get there. We did bifurcate into Lambda for training and AWS for deployment.
In your view, how commoditized is cloud GPU infrastructure today? Are most of your decisions driven primarily by price and specs? Or are there meaningful differences between providers beyond the hardware?
I think it is increasingly becoming the case that this will be commoditized. Price and specs are going to be the first thing that anyone looks at and thinks about.
But there are opportunities that I haven't seen fully taken advantage of. There are opportunities for providers to provide better training cloud experiences than are currently on offer.
By analogy, DigitalOcean was a really interesting thing because what it did was take a cloud service like AWS and create a version of it with only the core features and a much more pleasant developer experience. And then on top of that, they could charge a little bit more for that or at least get market share from AWS.
I think there's an opportunity for that in the space of machine learning researchers—cloud GPUs aimed at training large models. Right now, things are either a little raw, a little bit bare metal, or they're AWS with all of its existing problems. There's probably an opportunity to build something that's just a little bit more pleasant around that: something that makes it easier to run the kinds of workloads you do during model training and the day-to-day work of machine learning research, makes it easier to find where your data is, where your jobs are, and how things are going, and maybe integrates with services like Weights and Biases.
Do you think a company like Lambda is in a good position to evolve in that direction? Layer on more developer friendly tools? Or do you think that's more likely to come from a different type of company altogether?
I don't have any special inside information to suggest they would be the one to do it. But it does seem like NeoClouds are in an opportune position to do this.
I do think a word of caution though is that there are companies that have started to sell integrated software-hardware experiences. It's not exactly the same as the DigitalOcean analogy. Companies like Together.ai, or I think there's one called Baseten, are trying to put together "we're a place that will manage all of your GPUs for you and also we're going to have custom code that makes it easy or fast or high throughput to deploy your trained language models." Those levels of abstraction together don't seem to make great companies or great services.
I think Lambda providing something more like DigitalOcean but for GPUs is a thing that they are in an opportune place to do. I know also that they have a Lambda Cloud offering, which is like a DigitalOcean but you can only spin up one node at a time—you can't get a big networked cluster. But maybe with their one-click cluster idea that they've been working on for probably more than a year now, that might be the thing that unifies large multi-GPU training with the idea of a more developer-friendly or machine learning researcher-friendly cloud.
Have you ever encountered availability issues with Lambda in practice? For example, difficulty accessing certain types of GPUs at certain times, or limits on how many nodes you could spin up. How did that impact you if at all?
Initially, with the Lambda Cloud offering, which we did not make major use of, there were always GPU limitations. But this wasn't really a big problem or issue or concern for us.
With our reserved cloud, this isn't an issue because we just have complete 24/7 access to our reserved GPUs and exactly those GPUs.
Another version of this question is when we sign a contract with Lambda to get some number of GPUs set up, how long does that take them to get the machine put together? The answer there is that we have been fairly satisfied with lead times ranging from 3 to 6 months, and getting better over time. I think our more recent contract with them was more toward 3 months, and the one before that was more towards 6 months.
Earlier, you mentioned Together AI and others trying to move higher up the stack. Some of those companies lease GPU capacity rather than own it, and they tend to charge higher prices but offer more in terms of dev experience, APIs, and inference tooling. In your view, is the distinction between GPU infrastructure providers like Lambda that own their hardware and others like Together that lease and build software a meaningful one? And how do you think that plays into how each competes or evolves over time?
It is a meaningful distinction. I think it goes back to this training-deployment dichotomy.
In the training space, as a machine learning engineer and researcher, I want to build things that are bespoke and close to the metal, and that probably are going to break abstraction layers. So it is important that the provider own the hardware so that when the GPU goes down, they can get me another one, and we can guarantee certain quality of service. In general, they're responsible for bleeding-edge performance of their GPUs.
On the other hand, when it comes to inference, it really isn't that important for the provider to own the GPU. In fact, some of the most successful inference companies lately have been things like Modal. What they are is a way of using AWS way better than you could use AWS yourself. Basically, you send them your work, they've paid for and reserved a bunch of AWS time, and they run your work on AWS for you at a premium cost. The responsiveness, performance, etc. of your code is just much better.
So I think it really does come down to where you think you're the expert and where you think someone else can be the expert. And that then just comes down to abstraction because there are these transaction costs of building the right thing for you versus someone building something that may not be the right thing even if they've built a good version of it.
Looking ahead a few years, how do you see the cloud GPU space evolving? Are there trends or changes on the horizon you're watching that might reshape how you or others in the ecosystem approach training and inference?
Maybe a few parts to this answer. Part one is if Kubernetes could become a much more mature and widely adopted tool for machine learning training workloads, then it could be the case that GPUs could become a lot more fungible. It could be a lot easier to move workloads around, and it could be a lot easier to use GPUs on demand rather than get big reserved chunks of time, in the same way that on-demand single nodes are used for deployment right now in services like AWS.
This is held back by the fact that Kubernetes is not a mature ecosystem for machine learning training workloads. The idea of how you're going to move all of your data around and your big models around is not well settled. While marketplaces exist, there's not a super user-friendly and easy way to just say "I have a GPU workload and it's fully well-specified, I just want you to go run it" and provide a billing address—in the same way that you can do this if you just go to AWS and spin up an instance.
The second piece is it seems like hyperscalers, for their own uses, are setting up enormously sized data centers that consume tons of electricity. This could cause a giant increase for the whole field in the cost of GPUs because the power infrastructure is not able to keep up. This could create a power scarcity rather than necessarily a GPU scarcity. I'm not exactly sure how that would play out, but it could favor the larger clouds over the NeoClouds because of their ability to secure long-term utility contracts, or to do creative things like acquire companies that already have an existing ten-year utility contract and then basically throw away the rest of the company and just use the utility contract.
Third would be the entrance of other players besides NVIDIA. Right now, basically everything is just all about how many H100s or B200s are on offer. It's all the same software stack, and it's all a very small number of hardware SKUs, which makes workloads more portable.
One could imagine a situation in which AMD eventually, maybe on the time scale of five years, becomes a legitimate competitor for these types of workloads to NVIDIA. Should that happen, you will then have a big switching cost likely between AMD and NVIDIA, which might create a situation where you have "green cloud providers" (NVIDIA) versus "red cloud providers" (AMD). While it's relatively easy to switch within them, it may be hard to switch across them. That could create a little bit of pricing power by way of lock-in for these various clouds.
A fourth possibility is that if ChatGPT and similar really take off and agents really take off, it could be that GPUs just become basically unavailable to companies like mine because the economic value of just generating GPT-5, GPT-6 tokens for doing economic activity agentically is just so high compared to everything else. If that is the case, then the market will just be totally owned by these Frontier Labs.
Anything else you'd add that we haven't touched on yet you think is important for us or for investors to understand about the cloud GPU infrastructure space or where it's headed?
I'm still of the opinion that no one has done a good job of solving the DigitalOcean for machine learning researcher problem. There really is a big advantage to a cloud that can offer a much more pleasant developer experience on top of access to cheap GPUs.
I do think that on the deployment side, companies like Coiled and Modal and various other things in that stack—Temporal and things like that—are starting to really see the value there and are starting to fill in that market niche.
You mentioned earlier that your team looked at both Lambda and CoreWeave pretty closely before signing. Could you walk me through a bit more of that evaluation process? What were some of the key technical or business differences you noticed between Lambda and CoreWeave as you compared them side by side?
At that time, in late 2023, we were looking at both of them to provision an H100 cluster for a timescale of 12 to 18 months, maybe as long as 24 months. The requirements we had were that we basically wanted it built to the HGX reference architecture and, critically, with the high-quality InfiniBand interconnect.
With both providers, there was definitely some initial back and forth to get an idea of the spec that we really wanted, and then effectively, we just compared quotes back and forth between them. Ultimately, it did come down to a decision about price per GPU once everything was worked in.
We did speak with their engineering teams and their support teams to see what kind of infrastructure and stuff they had on the back end, how their software behind all of this was. We had to do some security assessments of both of them, some feeling out what our counterparty risk might be if one of these companies were to fold.
But ultimately, what it really came down to at the end was that they were extremely comparable, but Lambda was priced a bit cheaper.
How much weight did you place on customer support or responsiveness from the team? Did Lambda stand out in any way on that dimension?
That was hard for us to evaluate going in. It wasn't really clear which one we would get better support from. Both seemed very friendly. Both had engineers who were very knowledgeable.
I will say that Lambda brought their engineers to sales calls early, whereas CoreWeave—we had to ask for them to do that. So that might have just been culturally a little bit better of a fit for us because we're definitely more of an engineering-first company.
But that's really a vibes-based assessment. There wasn't really a KPI we could have pointed to that said we expected better service from one provider or the other.
Are there any specific workloads you see emerging in inference that you think will drive GPU demand going forward? Maybe things that are underappreciated right now, but could be big?
This is a little bit specific to my circumstances, but it might generalize to the space of reasoning, tool use, and agentic use.
In my particular use case, the primary use of GPUs before machine learning was physical simulation, and it remains the case in my domain that physical simulations still offer value. However, no one has yet figured out a way to really effectively integrate physical simulation on GPUs with machine learning on GPUs into one model.
The thing about physical simulation is that, at inference time, it is orders of magnitude more expensive than inference of machine learning models. So there probably is a path toward inference rollouts of machine learning models that either call physical simulations, or where part of the model is itself built as a physical simulation, and that in general demand a lot more GPU time in order to get a higher quality answer that brings in a lot more context. This domain could be materials or drug discovery or various other biomedical domains. It could even go into places like finite element modeling and engineering.
I think there is a really big opportunity there for those things within those domains, which is to say big as a fraction of those domains.
Something that's analogous but more popular and well-known is the idea of agentic use of these models where they can roll out very large numbers of tokens, think about them, search the web, reflect on that information, and just kind of do loops and loops to reach an answer rather than trying to spit out an answer.
I think the other thing that's going to become more important for non-LLM chatbot use cases is that a lot of other areas—finance, legal, physical simulations, drug discovery, biomedical, engineering, payments—are all realizing the value of large transformer models. They're all becoming competent at scaling both the training and inference of large versions of these models.
So whereas these companies, if they're using machine learning, might have been using old-school machine learning, hundred-thousand parameter types of models in the recent past, they could be moving towards billion-plus parameter models in the near future across all of these domains. This is because the skills to train these things and the software for them have become widespread and widely available, as the advantages of scale are becoming more and more evidenced both in terms of papers showing this fact in the language domain, and other researchers in other domains picking up these tools and demonstrating their effect.
So I do think it's likely the case that there will be a thousand or so medium-sized transformer models across all these different domains in the near future.
Disclaimers
This transcript is for information purposes only and does not constitute advice of any type or trade recommendation and should not form the basis of any investment decision. Sacra accepts no liability for the transcript or for any errors, omissions or inaccuracies in respect of it. The views of the experts expressed in the transcript are those of the experts and they are not endorsed by, nor do they represent the opinion of Sacra. Sacra reserves all copyright, intellectual property rights in the transcript. Any modification, copying, displaying, distributing, transmitting, publishing, licensing, creating derivative works from, or selling any transcript is strictly prohibited.