TLDR
- Broadcom TPUs cost $10,500-$15,000 versus Nvidia’s $40,000-$50,000 Blackwell chips, making them attractive for AI inference work
- UBS projects Broadcom will ship 3.7 million TPUs in 2026 and generate $60 billion in AI revenue
- Nvidia paid $20 billion for a nonexclusive license to Groq’s inference technology to strengthen its competitive position
- Cisco’s new Silicon One G300 chip targets the $600 billion AI infrastructure market, with claimed 28% speedups on some AI tasks
- Anthropic has ordered $21 billion of Broadcom TPUs, and Meta Platforms is in talks to adopt them as alternatives to Nvidia processors
The AI chip market is shifting as Broadcom and Cisco introduce products that compete directly with Nvidia’s dominant processors, driven by companies seeking more affordable options for running AI systems at scale.
Broadcom has developed Tensor Processing Units in partnership with Google that cost significantly less than Nvidia’s graphics processing units. The price gap is substantial, with TPUs selling between $10,500 and $15,000 compared to $40,000 to $50,000 for Nvidia’s Blackwell chips.
UBS analyst Timothy Arcuri predicts Broadcom will ship approximately 3.7 million TPUs in 2026, with shipments expected to exceed five million units by 2027. This growth reflects increasing demand from AI companies looking for cost-effective alternatives.
Major Customers Switch to TPUs
Anthropic has ordered $21 billion worth of TPUs in two separate purchases. Meta Platforms is also negotiating to use the processors, according to The Wall Street Journal. These deals represent a turning point as TPU sales expand beyond Google to external clients.
Broadcom forecasts AI revenue of $60 billion in 2026, climbing to $106 billion in 2027. Nvidia expects around $300 billion in data center sales for its fiscal 2027, which ends next January. The revenue gap shows Nvidia still leads but faces growing competition.
TPUs excel at AI inference, the stage in which a trained model generates answers and results, while Nvidia maintains its edge in model training. Benchmarks show that training a comparable model takes 35 to 50 days on Nvidia GPUs but roughly three months on TPUs.
Mizuho analysts estimate inference currently represents 20% to 40% of AI workloads. They project this will grow to 60% to 80% within five years. This trend favors TPUs, which are optimized for inference tasks.
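For a sense of the trade-off, the back-of-envelope sketch below combines the midpoints of the figures quoted above; the ratios are illustrative only, not benchmark results.

```python
# Back-of-envelope comparison using only figures quoted in the article.
# Midpoints are illustrative estimates, not benchmarks.

tpu_price = (10_500 + 15_000) / 2        # midpoint of the reported TPU price range
blackwell_price = (40_000 + 50_000) / 2  # midpoint of the reported Blackwell range
print(f"A TPU costs roughly {tpu_price / blackwell_price:.0%} of a Blackwell chip")

# Training time: 35-50 days on Nvidia GPUs vs ~90 days on TPUs, per the article.
gpu_train_days = (35 + 50) / 2
tpu_train_days = 90
print(f"Training runs about {tpu_train_days / gpu_train_days:.1f}x longer on TPUs")

# Inference is 20-40% of workloads today and projected at 60-80% in five years
# (Mizuho). The larger that share grows, the more the per-chip discount matters
# and the less the training-time gap does.
```

On these midpoints, a TPU costs about 28% of a Blackwell chip but takes roughly twice as long to train a model, which is why the shift toward inference-heavy workloads favors the cheaper chip.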
Nvidia Fights Back with Groq Deal
Nvidia recently acquired a nonexclusive license for technology from AI hardware startup Groq, paying $20 billion, a figure that includes compensation for Groq employees who joined Nvidia. Groq specializes in inference hardware, addressing a relative weakness in Nvidia’s lineup.
Cisco Launches Networking Solution
Cisco Systems introduced its Silicon One G300 switch chip for AI data centers. The chip will ship in the second half of 2026 and competes with offerings from both Nvidia and Broadcom. Taiwan Semiconductor Manufacturing Company will produce it using 3-nanometer technology.
Martin Lund, executive vice president of Cisco’s Common Hardware Group, said the chip includes features that prevent network slowdowns during traffic spikes, automatically rerouting data around problems within microseconds. Cisco claims the chip makes some AI computing tasks 28% faster.
Networking has become critical in AI infrastructure. Nvidia’s latest systems include networking chips that compete with Cisco products. Broadcom offers its Tomahawk chip series in the same space.
The three companies are competing for share of the $600 billion AI infrastructure spending boom, each focusing on a different part of the stack, from model training to inference to networking. Meanwhile, the average selling price for TPUs is expected to reach $20,000 in coming years as demand increases and the technology improves.