TLDR:
- Larry Fink predicts compute futures will emerge as a new tradable asset class similar to oil and grain.
- Data centers are set to consume 70% of all memory chips produced globally in 2026.
- DRAM supply grows at just 16% annually while AI infrastructure demand surges at over 80% per year.
- HBM memory from Samsung, SK Hynix, and Micron is fully sold out through 2026 and into 2027.
Compute scarcity is emerging as one of the most pressing structural challenges facing the global technology and financial sectors.
BlackRock CEO Larry Fink, who oversees $11.5 trillion in assets, addressed this growing concern at the Milken Institute Global Conference. His remarks pointed to severe shortages across power, chips, memory, and compute infrastructure.
The warnings from the world’s largest asset manager carry significant weight for investors, institutions, and policymakers navigating the AI-driven economy.
Fink Frames Compute as the Next Tradable Commodity
Larry Fink delivered a pointed message at the Milken Institute Global Conference. He stated plainly, “We just don’t have enough compute.” That blunt six-word statement captured a structural problem now gripping global technology markets.
Fink went further, predicting that compute will evolve into a fully tradable asset class. He said, “I actually believe a new asset class will be buying futures of compute.” This mirrors how markets currently price oil, grain, and natural gas.
The logic behind his prediction is straightforward. Demand for compute is growing faster than any supply chain can match. Forward contracts on future capacity would allow investors to hedge against ongoing shortages.
This perspective is notable because it comes from a capital allocator, not a technologist. When the chairman of BlackRock frames compute as a derivative market in the making, institutional money tends to follow.
Supply Data Confirms the Depth of the AI Resource Crunch
The numbers behind Fink’s warning are striking and verifiable. Data centers are projected to consume 70% of all memory chips produced globally in 2026. That alone shows how concentrated AI infrastructure demand has become.
Advanced HBM memory production from Samsung, SK Hynix, and Micron is sold out through 2026 and into 2027. A single AI server consumes 10 to 20 times more memory than a standard workload server. That gap explains why shortages are so acute.
DRAM supply growth is running at just 16% annually. Meanwhile, AI infrastructure demand is expanding at over 80% per year. That mismatch is not a temporary dislocation — it is a structural imbalance with a long runway.
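To see why that mismatch compounds rather than self-corrects, a simple projection helps. The sketch below indexes both supply and demand to 1.0 today (an assumption for illustration) and applies the article's cited growth rates of 16% and 80%:

```python
# Illustrative projection of the DRAM supply/demand gap described above.
# Assumption: supply and demand are both indexed to 1.0 in year zero;
# the 16% and 80% annual growth rates are the figures cited in the article.
SUPPLY_GROWTH = 0.16
DEMAND_GROWTH = 0.80

def project_gap(years: int) -> list[tuple[int, float, float, float]]:
    """Return (year, supply index, demand index, demand/supply ratio) rows."""
    rows = []
    supply = demand = 1.0
    for year in range(1, years + 1):
        supply *= 1 + SUPPLY_GROWTH   # supply compounds at 16% per year
        demand *= 1 + DEMAND_GROWTH   # demand compounds at 80% per year
        rows.append((year, round(supply, 2), round(demand, 2),
                     round(demand / supply, 2)))
    return rows

for year, s, d, ratio in project_gap(3):
    print(f"Year {year}: supply x{s}, demand x{d}, gap x{ratio}")
```

Under these assumed rates, demand outruns supply by roughly 1.6x after one year and nearly 4x after three, which is why the article calls the imbalance structural rather than a temporary dislocation.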
Fink also addressed the “AI bubble” narrative directly. He said, “There is not an AI bubble. There is the opposite. We have supply shortages. Demand is growing much faster than anyone has ever anticipated.” That statement counters the bearish thesis that AI investment is overextended.
For investors, the message is clear. The chip crunch, power shortage, and compute deficit are likely to worsen before any meaningful supply relief arrives.
Companies positioned along the supply chain — in chips, memory, power, and compute infrastructure — remain central to any serious AI investment thesis.