OpenAI
AI research lab offering GPT models via API and ChatGPT for consumers

Revenue

$13.00B

2025

Growth Rate (y/y)

194%

2025

Funding

$40.00B

2025

Details
Headquarters
San Francisco, CA
CEO
Sam Altman
Milestones
FOUNDING YEAR
2015

Revenue

Sacra estimates that OpenAI hit $13B in annualized revenue in July 2025, doubling from $6B in January and up 3x from $4B in 2024. The acceleration tracks with rising adoption of ChatGPT products across consumers and enterprises, as weekly active users grew to 700M in July, up from 500M in March, and paying business users surpassed 5M, up from 3M in June.

OpenAI monetizes primarily through ChatGPT subscriptions, which account for the bulk of revenue, complemented by API usage and enterprise sales of tools like ChatGPT Deep Research. The company has added productivity features like spreadsheet and presentation editing to drive deeper use cases, while offering bundling discounts on enterprise contracts.

Tens of millions of users are now paying subscribers, and the company appears on track to exceed its 2025 revenue projection of $12.7B. With usage growth outpacing efficiency gains, OpenAI also raised its 2025 cash burn forecast to $8B, up from $7B earlier in the year.

Valuation

OpenAI is valued at $300 billion as of March 2025, following a $40B Series F led by SoftBank ($30B commitment), nearly doubling from its $157B October 2024 valuation.

The round includes Microsoft, Thrive Capital, Altimeter, and Coatue, bringing total funding to approximately $57.9B. Half of the $40B funding is contingent on OpenAI completing its conversion from a nonprofit to a Delaware Public Benefit Corporation by December 2025.

The company has raised about $64B in total primary funding to date, including strategic investments from Microsoft, Nvidia, and SoftBank. Other key backers include Thrive Capital, Khosla Ventures, and Abu Dhabi’s MGX.

Product

OpenAI was founded in 2015 as a non-profit AI research lab and later restructured into a capped-profit company to attract funding. Over time, it has developed a focused suite of AI models and consumer products across text, image, and audio generation.

As of 2025, OpenAI’s core models as used in ChatGPT include:

GPT-4o (May 2024): OpenAI’s flagship multimodal model, capable of understanding and generating text, images, and audio with high performance and low latency. GPT-4o became the default engine for ChatGPT (both free and paid) and API requests, replacing the earlier GPT-4 Turbo.

o3 (April 2025): OpenAI’s flagship reasoning model and successor to o1, which works through multi-step problems in math, coding, and science before responding. It targets workloads that need deliberate reasoning rather than fast, low-cost completions.

o3-pro (June 2025): A version of o3 that allocates more compute per query to produce more reliable answers on hard problems. It is aimed at Pro and enterprise users willing to trade higher latency and cost for accuracy.

o4-mini and o4-mini-high (April 2025): Smaller, faster reasoning models optimized for lower cost and latency. o4-mini-high is the same model run at a higher reasoning-effort setting; both serve cost-sensitive workloads in ChatGPT and the API, including enterprise offerings (ChatGPT Team/Enterprise) where latency and cost-efficiency are critical.

GPT-5 (expected 2025): OpenAI’s next-generation model is reportedly slated for release as early as August 2025. GPT-5 is designed to integrate OpenAI’s “o-series” and GPT-series into a single system, enabling a more versatile AI that can utilize various tools and perform diverse tasks seamlessly.

ChatGPT

OpenAI’s breakthrough consumer product is the ChatGPT assistant, which in 2022–2023 brought large language models into the mainstream. By mid-2025, ChatGPT has evolved into a real-time, multimodal, voice-enabled agent used by an estimated 700 million people weekly.

Users employ ChatGPT for coding help, research and Q&A, writing assistance, personal tutoring or therapy, and more. With the introduction of Voice Mode (voice input/output) alongside GPT-4o, ChatGPT supports live spoken conversations, effectively combining a Siri-like interface with GPT-level intelligence.

It also gained native image understanding and generation capabilities, allowing users to input images and receive AI-generated images in responses.

Critically, ChatGPT’s functionality has expanded beyond basic chat. It can execute code and use tools like web browsers and plugins, turning it into an agent that can take actions on a user’s behalf. For example, ChatGPT can search the web in real time, manipulate files, or interface with third-party services.

OpenAI has added productivity integrations such as editing spreadsheets and creating slide presentations within ChatGPT. This blurs the line between a chatbot and a cloud-based productivity suite, as users can ask ChatGPT to analyze data, generate charts, or draft presentation slides without leaving the app.

API

Beyond ChatGPT, OpenAI also provides developer APIs for direct model access. The OpenAI API (launched 2020) lets developers embed GPT capabilities into their own applications on a pay-per-use basis.

Model access through the API has continually improved – from the original GPT-3 to the latest GPT-4o – with major boosts in speed and cost-efficiency in 2025. OpenAI has introduced features like function calling (letting the model return structured data that can trigger programmatic functions) and an Assistants API for building persistent AI agents that can use tools and remember context.
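As a sketch of how function calling works in practice, the loop below simulates the round trip: the model returns a structured tool call, and application code dispatches it to a local function. The tool name, schema, and response here are illustrative stand-ins, not OpenAI’s actual API surface, and no network call is made:

```python
import json

# Hypothetical local function the model can "call" -- the name and
# behavior are illustrative, standing in for a real database lookup.
def get_order_status(order_id: str) -> dict:
    return {"order_id": order_id, "status": "shipped"}

# Tool registry: each entry pairs a callable with a JSON-schema-style
# parameter description, as used when declaring tools to the model.
TOOLS = {
    "get_order_status": {
        "function": get_order_status,
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}

def dispatch(tool_call: dict) -> str:
    """Route a model-issued tool call to the matching local function."""
    fn = TOOLS[tool_call["name"]]["function"]
    args = json.loads(tool_call["arguments"])  # model emits JSON args
    result = fn(**args)
    # The JSON result would be sent back to the model as a tool message
    # so it can compose a natural-language answer.
    return json.dumps(result)

# Simulated structured output from the model.
model_tool_call = {"name": "get_order_status", "arguments": '{"order_id": "A123"}'}
print(dispatch(model_tool_call))
```

The key design point is that the model never executes anything itself: it only emits structured intent, and the application decides whether and how to run it.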

The API also encompasses other model endpoints like the DALL-E 3 image generator and Whisper speech-to-text, reflecting OpenAI’s broader AI offerings beyond text. Altogether, OpenAI’s product ecosystem – spanning ChatGPT (consumer and enterprise), developer APIs, and specialized tools – positions the company as a leading AI platform in 2025.

Business Model

OpenAI monetizes its technology via a combination of subscription services and usage-based fees, with an emphasis on turning its popular ChatGPT into a revenue engine.

Subscriptions

Paid subscriptions to ChatGPT account for the majority of OpenAI’s revenue. The flagship offering is ChatGPT Plus for consumers (introduced 2023 at $20/month), which provides faster responses and early access to new features.

This plan has amassed roughly 15 million active subscribers as of mid-2025, making it the single largest revenue source. Building on this, OpenAI rolled out higher-priced tiers: ChatGPT Pro at $200/month for power users (with expanded usage limits and priority access), ChatGPT Team (around $25–30 per user/month for small businesses), and ChatGPT Enterprise (custom-priced, roughly $60 per seat at list price) for large organizations.

These higher tiers, launched in late 2023 and 2024, have quickly grown the business user base to around 2 million paying business users by early 2025 (including educational users on a discounted plan). OpenAI incentivizes enterprise adoption by offering bundle discounts (on the order of 10–20% off) for large deployments. It also continues to add value to subscriptions – for example, the inclusion of new spreadsheet and presentation tools in ChatGPT Plus/Enterprise directly challenges Microsoft and Google’s productivity suites.

APIs

The second major revenue stream is the API and licensing business. Developers pay usage-based fees to access OpenAI’s models via cloud API, historically on the order of $0.03 per 1K tokens for GPT-4 and $0.002 per 1K tokens for GPT-3.5, with separate pricing tiers for other models.
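To make the usage-based economics concrete, here is a back-of-envelope cost calculator at those cited list prices. The rates are illustrative only; actual OpenAI pricing varies by model, distinguishes input from output tokens, and changes over time:

```python
# Back-of-envelope API cost model at the list prices cited above.
# Illustrative rates only, not current OpenAI pricing.
PRICE_PER_1K_TOKENS = {"gpt-4": 0.03, "gpt-3.5": 0.002}

def estimate_cost(model: str, tokens: int) -> float:
    """Usage-based fee: (tokens / 1000) * per-1K-token rate."""
    return tokens / 1000 * PRICE_PER_1K_TOKENS[model]

# Running 1M tokens through each model:
print(f"gpt-4:   ${estimate_cost('gpt-4', 1_000_000):.2f}")
print(f"gpt-3.5: ${estimate_cost('gpt-3.5', 1_000_000):.2f}")
```

The 15x price gap between tiers is why high-volume applications route most traffic to the cheaper models and reserve the flagship model for hard queries.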

This model-as-a-service business contributes an estimated 15–20% of OpenAI’s total revenue. While smaller in share, it is strategic: by powering hundreds of third-party applications and enterprise software (for instance, powering features in apps like Notion, Salesforce, or Bing), the API extends OpenAI’s reach and cements its models as a de facto platform.

OpenAI also earns some licensing income from partnerships – for example, the company’s deal with Microsoft integrates GPT-4 into Bing and Azure services (Microsoft effectively pays OpenAI for API usage, entangled with their broader investment deal).

Hybrid structure

OpenAI’s unusual hybrid structure—combining a capped-profit, for-profit subsidiary with a controlling nonprofit parent—shapes how the company’s investors and employees are ultimately compensated. This structure was designed to allow the organization to raise significant outside capital while preserving a mission-aligned governance framework.

Microsoft’s $13B investment in OpenAI over the past few years reflects both the company’s capital intensity and this hybrid incentive structure. Microsoft does not hold equity in OpenAI LP; instead, it receives a share of profits. Early investors and employees are entitled to returns capped at 100× their principal. Once OpenAI becomes profitable, those earliest investors get paid back first. Then, 25% of all profits go to early investors and employees (until they hit their cap), while 75% go to Microsoft until it recoups its $13B in principal.

After Microsoft has recovered its $13B, the split flips: Microsoft receives 50% of profits until it reaches a total return of $92B—at which point it too hits its cap. Once that happens, OpenAI reverts fully back to nonprofit control and retains 100% of future profits.
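As a toy sketch, the waterfall described above can be modeled directly. This simplifies heavily: it ignores the early backers’ 100x caps and payout timing, and lumps all non-Microsoft phase-two recipients into a single bucket:

```python
def distribute_profits(total_profit, msft_principal=13e9, msft_cap=92e9):
    """Walk cumulative profits through the described waterfall.

    Simplified sketch: ignores early backers' individual 100x caps and
    payout timing; non-Microsoft phase-two recipients are one bucket.
    """
    msft = early = 0.0
    remaining = total_profit
    # Phase 1: 25% to early investors/employees, 75% to Microsoft,
    # until Microsoft recoups its $13B principal.
    phase1 = min(remaining, msft_principal / 0.75)
    early += 0.25 * phase1
    msft += 0.75 * phase1
    remaining -= phase1
    # Phase 2: the split flips to 50/50 until Microsoft's cumulative
    # return reaches its $92B cap.
    phase2 = min(remaining, (msft_cap - msft) / 0.5)
    msft += 0.5 * phase2
    early += 0.5 * phase2
    remaining -= phase2
    # Phase 3: everything beyond the caps flows to the nonprofit.
    nonprofit = remaining
    return msft, early, nonprofit

m, e, n = distribute_profits(200e9)
print(f"Microsoft ${m / 1e9:.0f}B, early backers ${e / 1e9:.1f}B, nonprofit ${n / 1e9:.1f}B")
```

At $200B of cumulative profit in this toy model, Microsoft hits its $92B cap and every marginal dollar thereafter accrues to the nonprofit, which is the structural point of the cap.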

This structure functions like a hedge: it allows OpenAI to raise the capital it needs to survive in a compute-intensive, uncertain market, while preserving a long-term mission-focused structure if the company succeeds. It also helps explain why OpenAI has been so aggressive in monetizing ChatGPT so early—it’s not just about product-market fit, but also about proving that the capped-profit structure can sustain a cutting-edge AI company at scale.

Competition

OpenAI’s biggest competitors to date are Google, whose decade-plus of AI research is now coming to fruition; Meta, whose LLaMA models compete with GPT-4 from an open-source direction; and Anthropic, a rival private AI research lab.

Google

Google has long been a leader in AI research and now directly competes with OpenAI in large language models. In 2023, Google combined its Brain and DeepMind units to accelerate development of its next-generation AI, the multimodal Gemini family, launched in December 2023. Gemini handles text, images, and other modalities in an integrated way, aiming to match or surpass GPT-4’s capabilities.

Google’s key advantages are data and compute: it can train models on enormous troves of user data from Google Search, Gmail, YouTube, Android, etc., which are proprietary assets unavailable to OpenAI. Moreover, Google effectively owns the world’s largest AI computing infrastructure, from custom TPUs to vast data centers. Estimates suggested Google could afford to train models with 5× the computing power of GPT-4 by end of 2023 and 20× by end of 2024.

This scale might yield more advanced models or cheaper inference. By 2025, Google has integrated AI into many products (e.g. the Gemini chat app, formerly Bard, and AI-assisted features in Google Workspace) and offers Gemini models to developers via its cloud APIs. If Gemini delivers superior performance or unique integration (e.g. deeply woven into Android or Chrome), it could erode ChatGPT’s appeal.

Google also benefits from ecosystem control – for instance, it can distribute AI features to billions of users via Chrome or Android updates – and from a cash-rich core business (search advertising) that can subsidize free AI offerings. OpenAI, lacking such distribution channels and ad revenue, must rely on partnership (like the default Bing integration in ChatGPT) to reach end-users.

Meta

Meta (Facebook) has emerged as a major competitor by open-sourcing powerful language models. In early 2023, Meta’s LLaMA model (65B parameters) was leaked and then intentionally released as LLaMA 2 under a permissive license by mid-2023. This marked a turning point – for the first time, a top-tier model rivaling GPT-3/GPT-4 was available to the public and researchers.

Developers worldwide have since built upon LLaMA variants, fine-tuning them for specific tasks and even running them on local hardware. Meta’s strategy is to drive AI advancement through openness, which in turn pressures OpenAI’s proprietary approach.

LLaMA 3, released in April 2024, extended Meta’s push to make cutting-edge models widely available. In addition to model quality, Meta enjoys a massive infrastructure advantage: it reportedly operates one of the world’s largest fleets of NVIDIA H100 GPUs, giving it ample capacity to train and deploy AI models at scale.

Meta’s long history in AI research (from PyTorch to advanced projects like CICERO and Segment Anything) means it has deep expertise. While Meta doesn’t directly monetize its models (they are released for free use), its open-source releases threaten to commoditize the core technology. If anyone can download a model nearly as good as GPT-4 and run it cheaply, OpenAI could lose pricing power or see customers opt to fine-tune local models for cost or privacy reasons.

OpenAI is already facing competition from startups that deploy open models at lower costs. Meta’s open approach also garners goodwill and a community of developers, which could indirectly benefit Meta’s own products and reputation.

Anthropic

Anthropic is a San Francisco-based AI lab founded in 2021 by former OpenAI researchers (including Dario and Daniela Amodei) as a more safety-focused, enterprise-oriented rival. Anthropic’s flagship model is Claude, an AI assistant similar to ChatGPT.

From the start, Claude differentiated itself with an ultralarge context window (up to 100,000 tokens), allowing it to digest very long documents or even book-length texts in one prompt.

This made Claude attractive for corporate use cases like analyzing lengthy financial reports or legal documents, where ChatGPT’s earlier context limit (~4K–32K tokens) was insufficient. Claude is also tuned for a more cautious, “helpful and harmless” style, which businesses appreciate for reliability in things like customer service chatbots.

Anthropic’s focus on B2B use cases and being a “model provider” (rather than building consumer apps) has paid off. By mid-2025, Anthropic experienced extraordinary revenue growth – from about $1B annualized at the start of 2025 to $4B in annualized revenue by mid-2025.

This surge is likely driven by large contracts with cloud providers and enterprises (Anthropic has partnerships with Google Cloud and Amazon AWS, and powers AI features in products like Notion and Quora). Investor enthusiasm is high: Anthropic is reportedly closing a new funding round of ~$5B led by Iconiq at a $170B valuation, up sharply from a $20B valuation in early 2024. Such backing gives Anthropic resources approaching OpenAI’s.

In competition with OpenAI, Anthropic positions Claude as the safer, enterprise-friendly alternative to ChatGPT – essentially “OpenAI for companies that don’t want to rely on OpenAI.” Many organizations adopt Claude to diversify their AI stack or to avoid dependency on a single vendor. If OpenAI ever stumbles (in uptime, pricing, or PR), Anthropic stands to benefit as the primary second source. Anthropic’s close ties with major cloud platforms (Google invested in 2022; Amazon in 2023) also ensure Claude is well-distributed (e.g. offered as a service on AWS) and integrated into other enterprise tools. Overall, Anthropic has rapidly become OpenAI’s most direct peer in large models, competing on quality, safety, and context length.

TAM Expansion

OpenAI’s initial market was simply providing AI text generation via chat and API. By 2025, however, the company is aggressively expanding its TAM by pushing into new domains and deeper into the tech stack.

OpenAI’s long-term vision is to become an “intelligence layer” in both consumer and enterprise settings. In practice, this means moving beyond just answering questions to executing tasks, facilitating commerce, and integrating with user devices and infrastructure. Key vectors of TAM expansion include:

Enterprise agents

OpenAI is evolving from assisting humans to autonomously performing work on their behalf. With the introduction of GPT-4’s function calling and the “Assistants API,” developers and enterprises can create AI agents that carry out multi-step operations, not just single responses.

OpenAI has enabled these agents to use tools, browse the web, call APIs, and even control software via a desktop UI. For example, instead of a human filling out forms or clicking through enterprise SaaS menus, a GPT-4o-powered agent could handle tasks like filing expenses, scheduling meetings, updating CRM entries, or processing invoices based on a simple instruction.

Early versions of this are seen in ChatGPT’s ability to act as a coding assistant that executes code or as a plugin-based agent that can order groceries or book travel. In companies, such AI agents could cut down the need for junior administrative work or customer support roles, effectively shifting spending from human labor to AI services. OpenAI’s models would take a small fee for every task completed, which at scale across millions of tasks becomes a significant new revenue stream – a kind of meter on productive knowledge work.

If this trend takes off, OpenAI could tap into budgets currently spent on business software licenses or even labor outsourcing (for routine white-collar work). The competitive advantage here will be having the most reliable and capable agents, which OpenAI hopes to secure with its head start in model capabilities and its controlled ecosystem for tool use (plugins vetted for ChatGPT, etc.).

Enterprise agents also deepen lock-in: once an organization builds an AI workflow around OpenAI’s models, it may become as indispensable as an operating system.

From search to transactions

Another expansion vector is turning conversational AI into a commerce platform. In 2024, OpenAI (and others like Microsoft) began experimenting with integrating shopping and services directly into chat interactions.

Rather than referring a user out to a search engine or e-commerce site, ChatGPT can act on a query like “I need a birthday gift for my 5-year-old niece” by showing one perfect product suggestion and a “buy” button right there.

This fuses the traditional roles of search (finding information) and e-commerce (transaction) into one step. OpenAI stands to capture value through affiliate fees, lead generation payments from merchants, or even advertising in this paradigm. For instance, brands might pay to have their product be the one recommended for certain queries – a new form of AI-age ad placement.

Users benefit from convenience (no need to wade through lists of links or reviews – the AI uses its judgment to present an optimal choice). OpenAI has partnered with platforms like Stripe for payments, and with retailers and aggregators to source real-time product info via plugins. If ChatGPT becomes a trusted purchase assistant, it could take a cut of a huge volume of online sales.

This positions OpenAI not just as a software provider, but as a participant in the retail transaction value chain. It’s a high-margin opportunity (commissions can far exceed API token prices) and vastly expands TAM into sectors like shopping, travel booking, food ordering, and other consumer services. In effect, ChatGPT could become an AI concierge for everything – capturing a share of e-commerce without holding inventory or logistics (similar to how Google takes ad fees for steering customers to businesses).

AI operating system

OpenAI is also moving closer to the operating system layer, aiming to become a pervasive interface for users across devices.

In 2023–2024, ChatGPT was accessed mainly via a web browser or mobile app, but by 2025 OpenAI introduced native desktop applications for Mac and Windows.

The ChatGPT desktop app lets users call up the AI with a keystroke, use drag-and-drop (e.g. dropping a screenshot or document for analysis), and maintain conversation context across sessions. Moreover, OpenAI has implemented a form of long-term memory in ChatGPT: the AI can remember past interactions or user preferences over time, rather than each session starting fresh.

This means ChatGPT begins to act more like a persistent personal assistant that knows you – for example, it might recall your family members’ names mentioned in past chats, or your ongoing projects at work, to better assist you continuously. On mobile, there are deep integrations as well; notably, Apple Intelligence (introduced with iOS 18) allows Siri to pass complex user requests to ChatGPT behind the scenes.

Rather than building its own phone OS, OpenAI is collaborating with platform providers to embed GPT’s capabilities at the system level. The ultimate goal is for GPT-based assistance to be ubiquitous – available in every app, every context, whenever a user needs to solve a problem or automate a task.

If OpenAI’s AI becomes as fundamental as an operating system, the TAM extends to potentially every digital interaction a person has. This could open up usage-based monetization analogous to an OS license or app store cut, and it greatly increases user switching costs (if your life/work is interwoven with an AI that knows all your context, you wouldn’t easily switch to a competitor).

However, it also puts OpenAI in more direct competition with platform owners (like Apple, Google, Microsoft) who have their own assistant offerings – highlighting the importance of OpenAI’s strategy to partner (as with Microsoft on Windows, and with Apple’s shortcuts/Siri, etc.) to secure distribution.

Risks

Compute constraints: OpenAI’s progress is tightly bound to its access to expensive AI compute (specialized GPUs or future AI chips). The company’s model development and operations require tens of thousands of GPUs running in parallel, a resource in limited supply globally. Any supply crunch or cost spike in compute could slow OpenAI’s model improvements or make its services uneconomical.

Structural profitability: OpenAI currently does not have a clear path to profitability, given its astronomical spending on R&D and infrastructure. Unlike early internet or software companies that enjoyed high margins, OpenAI’s gross margins (~40%) are constrained by variable compute costs. While revenue is skyrocketing, expenses are rising just as fast – the company expects to burn $8B in cash in 2025 on compute and other costs. Cumulative losses will continue to mount (projected $14B in total losses by 2026 at current rate).

Regulatory scrutiny: As one of the most visible AI companies, OpenAI is under growing scrutiny from governments and regulators worldwide. Authorities are concerned about issues ranging from data privacy, to AI-generated misinformation, to the impact on jobs. In the EU, the AI Act, adopted in 2024, imposes stringent requirements on “foundation model” providers like OpenAI, such as disclosure of training data and transparency in AI outputs.


DISCLAIMERS

This report is for information purposes only and is not to be used or considered as an offer or the solicitation of an offer to sell or to buy or subscribe for securities or other financial instruments. Nothing in this report constitutes investment, legal, accounting or tax advice or a representation that any investment or strategy is suitable or appropriate to your individual circumstances or otherwise constitutes a personal trade recommendation to you.

This research report has been prepared solely by Sacra and should not be considered a product of any person or entity that makes such report available, if any.

Information and opinions presented in the sections of the report were obtained or derived from sources Sacra believes are reliable, but Sacra makes no representation as to their accuracy or completeness. Past performance should not be taken as an indication or guarantee of future performance, and no representation or warranty, express or implied, is made regarding future performance. Information, opinions and estimates contained in this report reflect a determination at its original date of publication by Sacra and are subject to change without notice.

Sacra accepts no liability for loss arising from the use of the material presented in this report, except that this exclusion of liability does not apply to the extent that liability arises under specific statutes or regulations applicable to Sacra. Sacra may have issued, and may in the future issue, other reports that are inconsistent with, and reach different conclusions from, the information presented in this report. Those reports reflect different assumptions, views and analytical methods of the analysts who prepared them and Sacra is under no obligation to ensure that such other reports are brought to the attention of any recipient of this report.

All rights reserved. All material presented in this report, unless specifically indicated otherwise is under copyright to Sacra. Sacra reserves any and all intellectual property rights in the report. All trademarks, service marks and logos used in this report are trademarks or service marks or registered trademarks or service marks of Sacra. Any modification, copying, displaying, distributing, transmitting, publishing, licensing, creating derivative works from, or selling any report is strictly prohibited. None of the material, nor its content, nor any copy of it, may be altered in any way, transmitted to, copied or distributed to any other party, without the prior express written permission of Sacra. Any unauthorized duplication, redistribution or disclosure of this report will result in prosecution.