Groq designs specialized processors and firmware for AI workloads.







[Chart: Growth Rate (y/y)]
Launched in 2017, Groq started shipping AI chips to customers in late 2020. We estimate it made ~$4.3M in revenue in 2021, ~300% growth over 2020. Groq's revenue comes from selling AI chips, AI systems, and professional services, including customer support. Half of its revenue comes from data centers and the other half from autonomous vehicles. Its biggest customers are in financial services, autonomous vehicles, and government labs like Argonne National Laboratory.



Note: Horizontal axis is on log scale for visual clarity. Size of the bubble indicates valuation.

Groq raised $362M from Tiger Global Management, D1 Capital Partners, TDK Ventures, and Social Capital. It’s valued at $1B, with a revenue multiple of 233x. Its privately held competitors also have high revenue multiples, with SambaNova at 200x and Graphcore at 560x, indicating a bet by VCs on the large size of the AI semiconductor market and the odds of the startups replacing Nvidia with their new, proprietary technologies. Publicly listed semiconductor companies trade at much lower multiples, with Nvidia at 14x, Intel at 2x, Qualcomm at 3x, and AMD at 7x.

Business Model

AI data centers have two types of workloads: training, where a large amount of data is fed into an AI model, and inference, where the trained model serves live requests. Data centers are at a point where large AI models get trained but are often not taken live because of the cost of setting up the infrastructure to serve them.

While training workloads run on GPUs, most data centers default to general-purpose CPUs for inference. GPUs with thousands of cores provide the throughput required for training, but their small cores are too slow to send real-time responses once a model goes live. To compensate for the lack of throughput, tens of thousands of expensive CPUs are linked in parallel, escalating the cost of such data centers to many billions of dollars.

Groq pitches its new chip architecture as a way to scale data centers for large inference workloads at lower cost and higher throughput, giving it a wedge into the AI data center market. GroqChip processes faster than general-purpose CPUs and scales better: thousands of GroqChips can be linked together without additional hardware or loss in performance.

A fabless company, Groq designs its chips and uses GlobalFoundries for fabrication. It sells them through OEM partners like Dell and cloud providers like Nimbix. It also sells integrated AI systems, such as the GroqCard accelerator that plugs into existing servers, GroqNode servers with 8 GroqChips in a box, and the GroqRack compute cluster with 64 interconnected GroqChips.

Groq is at an early stage of rolling out these chips to customers and is only the second AI chip startup with its chip commercially available through a cloud service.



Product
Groq makes specialized processors for AI workloads in data centers, branded as GroqChip. The first-generation GroqChip is built on a 14nm process with a single core and achieves up to 250T floating-point operations per second (FLOPS). For comparison, Apple's M2 is built on a 5nm process and achieves 3.6T FLOPS. Groq achieves much higher FLOPS while using older 14nm technology through a combination of software-based design and simplified architecture.

Software-based design: Groq pushed control and planning functions from the chip into the software, freeing up silicon surface on the chip for additional performance. Groq's software ensures that any computation takes the same amount of time every time it runs. Thousands of GroqChips can therefore be linked in parallel without any of them waiting on other chips to finish, delivering high parallel processing without the synchronization overhead that general-purpose CPUs require.
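To make the determinism point concrete, here is a toy sketch of compile-time scheduling — the per-op cycle counts and op names are invented for illustration and have nothing to do with Groq's actual compiler or instruction set. When every operation's latency is known exactly in advance, a program's finish time is fixed before it runs, so identical chips never drift out of step and need no runtime synchronization.

```python
# Toy model of static (compile-time) scheduling.
# Hypothetical fixed cycle costs, known before execution.
OP_CYCLES = {"matmul": 120, "add": 4, "relu": 2}

def compile_schedule(program):
    """Assign every op a fixed start cycle; the total run time is
    known before anything executes, so chips stay in lockstep."""
    plan, clock = [], 0
    for op in program:
        plan.append((op, clock))
        clock += OP_CYCLES[op]
    return plan, clock

plan, total = compile_schedule(["matmul", "add", "relu"])
print(plan)   # [('matmul', 0), ('add', 120), ('relu', 124)]
print(total)  # 126 -- identical on every chip, every run
```

Contrast this with a dynamically scheduled processor, where cache misses and arbitration make each run's timing slightly different, forcing chips in a cluster to wait on each other.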

Simplified architecture: By pushing controls into the software, Groq simplified the processor architecture down to a single core in which all transistors do useful computation instead of spending performance on synchronizing and passing messages between multiple cores, making GroqChip faster. Unlike other processors that rely on batching datasets to improve throughput, GroqChip delivers high performance even at a batch size of 1, which is often required in real-time applications like autonomous vehicles and voice detection.
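A rough way to see why batch size 1 matters for real-time workloads — the numbers below are purely illustrative, not measured GroqChip or GPU figures: a batching accelerator amortizes overhead across requests, but each request then has to wait for the batch to fill before compute even starts.

```python
def per_request_latency_ms(batch_size, arrival_gap_ms=10, compute_ms=5):
    """Worst-case latency for the first request in a batch: it waits
    for (batch_size - 1) later arrivals before computation begins.
    All parameters are made-up illustrative numbers."""
    return (batch_size - 1) * arrival_gap_ms + compute_ms

print(per_request_latency_ms(1))   # 5 ms  -- batch-of-1 responds immediately
print(per_request_latency_ms(32))  # 315 ms -- too slow for a vehicle or voice UI
```

Batching raises aggregate throughput, but a self-driving car or voice assistant cares about the latency of each individual request, which is where batch-of-1 performance wins.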



Competition
The AI hardware market is expected to grow 34% annually to reach $105B by 2025, with 80% coming from data centers. Even though more than 50 companies are making AI chips, the AI data center chip market is dominated by three companies: Nvidia ($390B), Intel ($123B), and AMD ($118B). They face competition from cloud platforms like AWS, Google Cloud, and Microsoft Azure, which are bringing proprietary AI chips-in-a-box to market, and from startups like Groq with specialized AI chips.


Nvidia
Nvidia used GPUs, built primarily for gaming, to capture nearly 100% of the AI training market, growing its market cap from $7B to $390B in less than ten years. It also has a ~20% share of the inference market, where it's pushing hard with the launch of its first AI data center CPU, Grace, built on ARM's platform.

Cloud platforms

All major cloud providers are designing custom chips. Google Cloud was the first with its tensor processing unit (TPU), Microsoft Azure has FPGAs, and AWS has the Inferentia chip, designed specifically for inference. In 5 to 10 years, a provider like AWS could offer a cheap AI box built entirely from AWS-made components, pushing the incumbents out of their dominant position.


Startups
AI chip startups raised ~$10B from VCs in 2021, more than 3x the total funding in 2020. In the smartphone and PC chip markets, once an architecture became dominant, all software was designed around it. AI models, by contrast, are developed with frameworks like TensorFlow or PyTorch that can easily run on any new chip, lowering the entry barriers for startups.

None of the startups has yet broken out of the pack to pose a serious challenge to the incumbents. Cerebras and Graphcore have found public use cases at well-known research centers and AI startups, while Groq has been less visible.

TAM Expansion

In the last 18 months, the size of AI models increased 50x, making it impossible to speed up AI workloads just by cramming more transistors onto chips: at a Moore's-law pace, it would take chip companies roughly 120 months (10 years) to increase transistor density by 50x. Deploying AI models trending toward 1T parameters requires new types of chips and software stacks that provide low latency and high parallel processing.
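The 120-month figure is consistent with back-of-the-envelope Moore's-law arithmetic: a 50x density increase requires log2(50) ≈ 5.6 doublings, and at the historical doubling cadence of roughly 18 to 24 months, that works out to about 100 to 135 months.

```python
import math

growth = 50                      # models grew ~50x in 18 months
doublings = math.log2(growth)    # ~5.64 density doublings needed for 50x
for cadence_months in (18, 24):  # historical Moore's-law doubling range
    months = doublings * cadence_months
    print(round(months))         # 102 and 135 -- bracketing the ~120-month figure
```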

Next generation chip

Groq developed the first-generation GroqChip frugally, for less than $50M, and has raised $300M as it works on the second generation. First-generation chips are typically testbeds to land early customers and iterate toward an improved design. Groq's software-led design approach lines up well with the slowing gains from making chips larger or denser. Groq's future market share depends on how much it can improve the performance of the next-gen chip and how closely it can partner with OEMs and cloud providers to sell it.

New use cases

While Groq targets inference workloads in data centers, GroqChip is seeing adoption in autonomous vehicles thanks to its high speed, low power consumption, and small-batch processing. The AI edge market is more fragmented than the data center market, making it easier for a new company to gain share. In the past, startups that began with data centers have successfully expanded into selling chips to edge companies.


Risks
Long gestation period

New chips take 3 to 4 years to develop and another 2 to 3 years to become commercially viable, making the feedback loops for companies and investors very long. With incumbents like Nvidia and Intel aggressively releasing new specialized AI chips and using their distribution muscle to upgrade their customers to these new chips, there’s a risk that the startups, even if they have a better technology stack, will struggle to become commercially viable.

Talent availability

Unlike regular software development, chip design and development requires highly specialized talent. With so many companies building AI chips, only a few will reach critical mass in the talent needed to build some of the world's fastest chips, irrespective of how much funding they raise.


Team
Jonathan Ross
CEO and founder
Adrian Mendes
Samidh Chakrabarti
Michelle Donnelly
Pj Jamkhandi
VP, finance and accounting
Edward Kmett
Head, software engineering
Jim Miller
VP, hardware engineering


This report is for information purposes only and is not to be used or considered as an offer or the solicitation of an offer to sell or to buy or subscribe for securities or other financial instruments. Nothing in this report constitutes investment, legal, accounting or tax advice or a representation that any investment or strategy is suitable or appropriate to your individual circumstances or otherwise constitutes a personal trade recommendation to you.

Information and opinions presented in the sections of the report were obtained or derived from sources Sacra believes are reliable, but Sacra makes no representation as to their accuracy or completeness. Past performance should not be taken as an indication or guarantee of future performance, and no representation or warranty, express or implied, is made regarding future performance. Information, opinions and estimates contained in this report reflect a determination at its original date of publication by Sacra and are subject to change without notice.

Sacra accepts no liability for loss arising from the use of the material presented in this report, except that this exclusion of liability does not apply to the extent that liability arises under specific statutes or regulations applicable to Sacra. Sacra may have issued, and may in the future issue, other reports that are inconsistent with, and reach different conclusions from, the information presented in this report. Those reports reflect different assumptions, views and analytical methods of the analysts who prepared them and Sacra is under no obligation to ensure that such other reports are brought to the attention of any recipient of this report.

All rights reserved. All material presented in this report, unless specifically indicated otherwise is under copyright to Sacra. Sacra reserves any and all intellectual property rights in the report. All trademarks, service marks and logos used in this report are trademarks or service marks or registered trademarks or service marks of Sacra. Any modification, copying, displaying, distributing, transmitting, publishing, licensing, creating derivative works from, or selling any report is strictly prohibited. None of the material, nor its content, nor any copy of it, may be altered in any way, transmitted to, copied or distributed to any other party, without the prior express written permission of Sacra. Any unauthorized duplication, redistribution or disclosure of this report will result in prosecution.