Anthropic
API and chatbot for developers and businesses to access Claude large language models

Revenue: $5.00B (2025)
Funding: $6.98B (2024)
Headquarters: San Francisco, CA
CEO: Dario Amodei
Founded: 2021

Revenue

Sacra estimates that Anthropic hit $5B in annual recurring revenue (ARR) in July 2025, up from $1B at the end of 2024. The company is currently projecting $9B in ARR by the end of 2025.

Enterprise and startup API calls continue to drive 70-75% of Anthropic's revenue through pay-per-token pricing, with Claude Sonnet 4 maintaining rates of $3 per million input tokens and $15 per million output tokens.

Major customers include Cursor, which surpassed $500M in annual recurring revenue, along with established partners like Sourcegraph, GitLab, and Bridgewater Associates. The company distributes its models primarily through AWS Bedrock and Google Vertex AI, while consumer subscriptions like Claude Pro ($20/month) and Claude Team ($30/user/month) account for 10-15% of revenue.

Code generation remains the primary revenue driver, with Anthropic's models recognized as industry-leading for programming tasks, outperforming competitors including OpenAI according to internal evaluations.

The February 2025 launch of Claude Code as a standalone product further strengthened this position. Development workflows continue to generate substantial token usage, with single completions or multi-file operations consuming 5,000-20,000 tokens, far more than typical chat interactions.

Valuation

Anthropic was last valued at $61.5B in their March 2025 Series E, led by Lightspeed Venture Partners with participation from General Catalyst, Jane Street, and Fidelity Management & Research Company.

Based on their $1.4B in ARR as of March 2025, Anthropic was valued at roughly 43.9x ARR. The company has raised approximately $14.3B in total funding commitments, including significant investments from Salesforce Ventures and Zoom Video Communications, along with major commitments from Amazon ($4B) and Google ($550M).
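The revenue multiple quoted above follows directly from the two figures in this section; a minimal sketch of the arithmetic:

```python
# Sanity-check the revenue multiple, using figures quoted in this report.
valuation_usd = 61.5e9   # Series E valuation, March 2025
arr_usd = 1.4e9          # ARR as of March 2025

multiple = valuation_usd / arr_usd
print(f"{multiple:.1f}x ARR")  # prints "43.9x ARR"
```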

Product


Anthropic’s flagship product is the Claude family of large language models, which as of May 2025 includes Claude Opus 4 and Claude Sonnet 4. These models represent a significant leap forward in performance, particularly for advanced reasoning and software development tasks.

Claude Opus 4 is designed for high-complexity tasks with state-of-the-art coding capabilities, while Claude Sonnet 4 offers fast, cost-efficient performance for everyday enterprise workflows. Both models support hybrid reasoning modes—responding instantly when speed is prioritized or engaging in deeper, multi-step thinking when accuracy is critical.

Claude models now support context windows of up to 200,000 tokens, allowing for processing of long documents and sustained conversations across sessions. Anthropic has also equipped Claude 4 with enhanced tool use, long-term memory, and integration capabilities.

Models can invoke external APIs, access live documents, and remember relevant facts across interactions. These upgrades allow Claude to function as an interactive assistant rather than a static chatbot.

To standardize these integrations, Anthropic introduced the Model Context Protocol (MCP), a new open standard that enables secure, real-time connections between Claude and enterprise systems.

Developers can now link Claude to proprietary datasets, internal APIs, and software tools using MCP, making it easier to deploy Claude as an embedded intelligence layer inside larger applications.
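The exact MCP SDK surface is beyond this report's scope, but the core pattern the protocol standardizes, registering typed tools that a model can discover and invoke over a JSON interface, can be sketched in plain Python. All names below are illustrative, not the official MCP SDK:

```python
# Illustrative sketch of the tool-registration pattern MCP standardizes.
# Names and structures here are hypothetical, not the official MCP SDK.
import json

TOOLS = {}

def tool(name, description):
    """Register a function as a model-invokable tool with a description."""
    def decorator(fn):
        TOOLS[name] = {"description": description, "fn": fn}
        return fn
    return decorator

@tool("lookup_customer", "Fetch a customer record from an internal CRM")
def lookup_customer(customer_id: str) -> dict:
    # In a real deployment this would query a proprietary datastore.
    return {"id": customer_id, "tier": "enterprise"}

def handle_tool_call(request_json: str) -> str:
    """Dispatch a model-issued tool call: JSON request in, JSON result out."""
    req = json.loads(request_json)
    result = TOOLS[req["name"]]["fn"](**req["arguments"])
    return json.dumps({"name": req["name"], "result": result})

print(handle_tool_call(
    '{"name": "lookup_customer", "arguments": {"customer_id": "c-42"}}'
))
```

The real protocol adds discovery, schemas, and transport details on top of this dispatch loop, which is what lets Claude connect to arbitrary enterprise systems without bespoke glue code for each one.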

Anthropic also launched Claude Code, a dedicated development tool for software engineers. Integrated via command-line and editor plugins like VS Code and JetBrains, Claude Code provides AI-powered pair programming, debugging, and multi-file code editing.

With benchmarks like 72.5% on SWE-bench, Claude Code is now regarded as one of the most capable coding assistants on the market. Companies like Cursor, GitLab, and GitHub have adopted Claude for developer productivity, with GitHub planning to deploy Claude Sonnet 4 inside Copilot for enhanced instruction-following and code quality.

Business Model

Anthropic makes money in three main ways: pay-per-token API access to its models, consumer and team subscriptions to its Claude chatbot, and reserved-capacity contracts for large enterprises.

1. Token-based API revenue

Approximately 70–75% of Anthropic’s revenue comes from pay-per-token API calls. Customers are charged per million tokens processed across inputs and outputs, with different rates depending on the model:

Claude Opus 4: $15 per million input tokens, $75 per million output tokens

Claude Sonnet 4: $3 per million input tokens, $15 per million output tokens

Claude Haiku (lightweight model): $0.25 per million input tokens, $1.25 per million output tokens

These APIs are used directly by enterprises or indirectly via third-party applications. Claude is accessible on AWS Bedrock, Google Vertex AI, and Databricks, making it easy for customers to integrate it into existing cloud workflows. This distribution strategy positions Anthropic as a model provider for multiple ecosystems, including those controlled by Amazon and Google—both of which are strategic investors.

High-usage workflows—especially in code generation, document analysis, and research—can consume tens of thousands of tokens per session. This leads to substantial recurring revenue for Anthropic even at relatively low cost per token.

2. Subscriptions

Claude is also available as a direct-to-consumer chatbot at Claude.ai, with several pricing tiers:

Claude Pro ($20/month): Access to higher usage limits and priority service.

Claude Max ($100/month or $200/month): For heavy users needing significantly higher throughput and larger response sizes.

Claude Team ($30/user/month, 5-seat minimum): Adds collaboration features and admin tools.

Claude Enterprise (custom pricing): Includes longer context windows, higher throughput, security features like SSO, and auditability.

These subscriptions account for 10–15% of total revenue. Usage within these tiers is still token-based, with model access governed by rate limits and session size.

3. Reserved capacity and enterprise commitments

For large customers, Anthropic offers reserved capacity and guaranteed throughput in exchange for fixed-rate contracts. This is especially important for mission-critical deployments where latency and availability must be controlled. Customers purchase “model units” that guarantee performance regardless of broader platform load.

Competition

OpenAI

OpenAI is still the market leader in terms of adoption, revenue, and product scope. Backed by Microsoft and integrated across the Microsoft ecosystem, OpenAI's GPT-4 remains the most widely used model in enterprise applications through Azure OpenAI Service and in consumer use via ChatGPT. As of mid-2025, OpenAI was generating roughly $13 billion in annualized revenue, driven by massive scale across both enterprise APIs and ChatGPT Plus subscriptions.

GPT-4 remains a top-tier general-purpose model, particularly for tasks requiring broad world knowledge, creative generation, and reasoning across modalities. OpenAI has also expanded into other modalities with DALL-E for images, Whisper for audio transcription, and Sora for video generation. The breadth of the offering gives OpenAI a strong moat, particularly among companies looking for an all-in-one AI provider.

Anthropic has been able to compete by outperforming GPT-4 in key areas like context length (200K tokens vs. GPT-4 Turbo's 128K), code generation (Claude Opus 4 leads SWE-bench and other benchmarks), and price (Claude Sonnet is up to 80% cheaper per token).

Still, OpenAI's dominance in mindshare and Microsoft’s enterprise sales muscle give it an ongoing distribution advantage. Claude often plays the role of a complement to GPT-4 in companies running a multi-model strategy, but rarely displaces it entirely.

Google

Google consolidated its AI research under the DeepMind brand and released the Gemini family of models to compete with GPT-4 and Claude. Gemini 1 launched in late 2023, and Gemini 2.5 Pro became the flagship model for text and code by mid-2025.

Gemini models are natively multimodal, with capabilities in reasoning over charts, images, documents, and tables. Google also brings a unique edge in compute scale and proprietary data, including access to Gmail, Docs, and Search usage data for fine-tuning.

DeepMind positions Gemini as a foundational capability across Google’s ecosystem: Gemini powers generative features in Workspace (Docs, Sheets, Gmail), Android’s AI assistant, and the Gemini chatbot that replaced Bard. The company’s emphasis is on full-stack integration rather than third-party API consumption.

While Claude is offered through Google’s Vertex AI, it competes directly with Gemini on that same platform. Google’s control over the infrastructure stack, model training, and end-user interface gives it a major advantage—but also limits its openness to external deployment, which Anthropic can use as a wedge with enterprise customers.

Meta

Meta has become the primary force behind open-source LLMs, releasing the LLaMA family of models under a permissive license. LLaMA 2 and 3 saw widespread adoption by startups and hobbyists, and LLaMA 4 extended that lead with larger models and improved benchmarks. Meta’s strategy is to commoditize the base model layer, using its own models internally while allowing others to build on top of them. This undermines the moat around closed-source models like Claude and GPT-4.

Anthropic’s biggest risk from Meta is not direct competition—Meta does not sell model access via API—but the acceleration of the open-source ecosystem. LLaMA 3 and Mistral 7B have become the foundation for dozens of fine-tuned models that businesses can run privately.

For companies with the technical resources to host and tune their own models, the appeal of zero marginal cost can outweigh the benefits of Claude’s safety or reliability. That said, Claude continues to outperform LLaMA-based models on longer, more complex tasks where safety and steerability matter.

TAM Expansion

Anthropic’s addressable market has expanded significantly in 2025, driven by enterprise adoption of Claude for productivity, software development, and document analysis. The company’s investments in long-context capabilities, tool use, and model integrations position it to capture a growing share of AI spend across sectors.

Advanced virtual assistants

With Claude 4’s 200,000-token context window and improved memory, Claude is now able to act as a true AI assistant for long-form tasks. Companies are using Claude to summarize meeting transcripts, draft responses to customer tickets, write internal documentation, and generate strategic reports. Unlike previous generations of chatbots, Claude can read and remember hundreds of pages of content and carry that information across multi-turn interactions. This enables its use in complex workflows that previously required human handoffs, including cross-departmental knowledge management and executive support.

Anthropic is also building toward more autonomous, agent-like functionality. Claude can now run in “extended reasoning” mode, call external tools through APIs, and write to persistent memory files to maintain state. This makes Claude suitable for use cases like sales prospecting, data research, and operations automation—roles where Claude can serve as a reliable junior analyst or assistant.

Code generation

Code generation has become one of the largest drivers of AI usage. With Claude Opus 4, Anthropic has pushed deeper into the developer tooling market, offering capabilities on par with or better than OpenAI’s Codex and GPT-4 models for multi-file reasoning, debugging, and test generation.

Anthropic launched Claude Code, a command-line and IDE-integrated assistant, to extend Claude’s presence inside the development environment. Use cases include pair programming, refactoring, dependency management, and autonomous agent-style code editing. By enabling Claude to persist memory and reason across large codebases, Anthropic is expanding from a conversational assistant to a semi-autonomous engineering collaborator.

This opens up TAM not just within engineering orgs but across any company building with software—startups integrating AI into their workflows, enterprise teams maintaining legacy systems, and agencies using AI for client deliverables.

Platform integration and agents

With the launch of MCP (Model Context Protocol), Anthropic is positioning Claude as a platform for integrating AI into existing business systems. MCP allows developers to plug Claude into real-time data sources, knowledge bases, and SaaS applications, enabling AI workflows that extend beyond simple Q&A.

This creates the foundation for Claude to operate as part of larger agentic systems—teams of AI instances that coordinate tasks, execute code, and interact with software interfaces. Claude can already handle tool invocation and memory persistence, two prerequisites for autonomous agent behavior. As orchestration layers mature, Anthropic is well-positioned to power the backend intelligence layer across support, operations, and back-office automation.

These developments grow Claude’s addressable market from “chat-based interfaces” to broader categories of enterprise AI infrastructure, RPA (robotic process automation), and intelligent agents.

Risks

Compute constraints: Anthropic’s model development depends on access to scarce AI compute—specifically H100 GPUs and other high-end chips—which are in limited global supply. Any disruption in availability or spike in compute pricing could delay Claude training cycles, degrade product performance, or force a scale-back in model ambitions.

Structural profitability: Like other foundation model developers, Anthropic faces high variable costs tied to inference and training. While revenue is growing rapidly, its margins remain constrained by the cost of running large models. Without sustained cloud credits or additional funding, the company may struggle to reach profitability at scale.

Regulatory scrutiny: Anthropic is subject to emerging global AI regulation, including the EU’s upcoming AI Act and evolving U.S. oversight. As a foundation model provider, it may be required to disclose training data sources, implement model monitoring, and undergo external audits—raising costs and introducing legal risk.

Funding Rounds

Share Name   Issue Price   Issued At
Series E-3   $56.0865      Jan 2025
Series E-1   $56.0865      Jan 2025
Series E-5   $53.2822      Jan 2025
Series E-4   $50.4779      Jan 2025
Series E-2   $20.928       Jan 2025
Series D-1   $30.0045      May 2024
Series D-3   $30.0045      May 2024
Series D-2   $27.0041      May 2024
Series C-1   $11.2261      May 2023
Series C-2   $11.2261      May 2023
Series B     $11.2261      Feb 2023
Series A     $2.5656       May 2021
Source: Certificate of Incorporation.

DISCLAIMERS

This report is for information purposes only and is not to be used or considered as an offer or the solicitation of an offer to sell or to buy or subscribe for securities or other financial instruments. Nothing in this report constitutes investment, legal, accounting or tax advice or a representation that any investment or strategy is suitable or appropriate to your individual circumstances or otherwise constitutes a personal trade recommendation to you.

This research report has been prepared solely by Sacra and should not be considered a product of any person or entity that makes such report available, if any.

Information and opinions presented in the sections of the report were obtained or derived from sources Sacra believes are reliable, but Sacra makes no representation as to their accuracy or completeness. Past performance should not be taken as an indication or guarantee of future performance, and no representation or warranty, express or implied, is made regarding future performance. Information, opinions and estimates contained in this report reflect a determination at its original date of publication by Sacra and are subject to change without notice.

Sacra accepts no liability for loss arising from the use of the material presented in this report, except that this exclusion of liability does not apply to the extent that liability arises under specific statutes or regulations applicable to Sacra. Sacra may have issued, and may in the future issue, other reports that are inconsistent with, and reach different conclusions from, the information presented in this report. Those reports reflect different assumptions, views and analytical methods of the analysts who prepared them and Sacra is under no obligation to ensure that such other reports are brought to the attention of any recipient of this report.

All rights reserved. All material presented in this report, unless specifically indicated otherwise is under copyright to Sacra. Sacra reserves any and all intellectual property rights in the report. All trademarks, service marks and logos used in this report are trademarks or service marks or registered trademarks or service marks of Sacra. Any modification, copying, displaying, distributing, transmitting, publishing, licensing, creating derivative works from, or selling any report is strictly prohibited. None of the material, nor its content, nor any copy of it, may be altered in any way, transmitted to, copied or distributed to any other party, without the prior express written permission of Sacra. Any unauthorized duplication, redistribution or disclosure of this report will result in prosecution.