TLDR
- Google will provide Anthropic with up to 1 million Tensor Processing Units (TPUs) in a deal worth tens of billions of dollars, set to deploy in 2026
- The arrangement will add over 1 gigawatt of computing capacity to support Anthropic’s AI operations
- Anthropic’s annual revenue has reached $7 billion, with Claude serving more than 300,000 businesses
- Google has invested $3 billion in Anthropic, while Amazon has committed $8 billion and serves as its primary cloud provider
- Anthropic uses a multi-cloud strategy across Google TPUs, Amazon Trainium chips, and Nvidia GPUs to optimize costs and performance
Google and Anthropic announced a major cloud computing agreement on Thursday. The deal gives Anthropic access to up to 1 million of Google’s custom-designed Tensor Processing Units.
The partnership is valued at tens of billions of dollars. It represents one of the largest hardware commitments in the AI industry to date.
The TPUs will be deployed starting in 2026. They will bring more than 1 gigawatt of computing capacity online for Anthropic’s operations.
Industry experts estimate that a 1-gigawatt data center costs around $50 billion to build. About $35 billion of that cost typically goes to chips and hardware.
Anthropic was founded by former OpenAI researchers. The company has grown rapidly over the past two years.
The AI startup now serves more than 300,000 businesses through its Claude language models, a 300-fold increase in just two years.
Anthropic’s annual revenue run rate has reached $7 billion. Its customer base includes large enterprises that each contribute more than $100,000 in annual revenue.
Claude Code, the company’s coding assistant, reached $500 million in annualized revenue within two months of launch. Anthropic claims this makes it the fastest-growing product in history.
Multi-Cloud Infrastructure Strategy
Anthropic uses a multi-cloud approach for its computing needs. The company runs workloads across Google’s TPUs, Amazon’s Trainium chips, and Nvidia’s GPUs.
Each platform handles specialized tasks like training, inference, and research. This strategy allows Anthropic to optimize for price, performance, and power consumption.
Google praised the efficiency of its TPU technology. “Anthropic’s choice to expand its usage of TPUs reflects the strong price-performance its teams have seen,” said Google Cloud CEO Thomas Kurian.
The deal includes access to Ironwood, Google’s seventh-generation TPU, which is designed specifically for machine learning workloads.
Amazon remains Anthropic’s largest investor and primary cloud provider. The retail giant has invested $8 billion in the AI company, more than double Google’s $3 billion investment.
AWS built a custom supercomputer called Project Rainier for Claude. It runs on Amazon’s Trainium 2 chips, which help reduce computing costs.
Investment and Growth
Google first invested $2 billion in Anthropic in 2023 and added another $1 billion in early 2025, bringing its total investment to $3 billion.
Rothschild & Co analyst Alex Haissl estimated that Anthropic added one to two percentage points to AWS growth in recent quarters. This contribution is expected to exceed five percentage points in the second half of 2025.
The number of large Anthropic customers has grown nearly sevenfold in the past year. This customer growth drives demand for more computing capacity.
Anthropic maintains control over its model weights, pricing, and customer data. The company has no exclusivity agreements with any cloud provider.
The multi-cloud approach proved useful during Monday’s AWS outage. Claude continued operating normally because its infrastructure is spread across multiple providers.