Anthropic’s rapid revenue growth and massive TPU compute commitment signal a new phase in the AI infrastructure race.
Anthropic has crossed $30 billion in annualized revenue – surpassing OpenAI – and locked in one of the largest compute commitments in AI history: 3.5 gigawatts of Google TPU capacity supplied by Broadcom, starting in 2027.
The revenue figure represents a 233% increase from roughly $9 billion at the end of 2025 – more than a tripling in a matter of months. Enterprise customers spending more than $1 million annually doubled from 500 to over 1,000 in under two months. Claude Code alone accounts for more than $2.5 billion of the annualized run rate.
Why It Matters
The numbers reorder the AI lab hierarchy. OpenAI reported approximately $24–25 billion in annualized revenue as of late March 2026, making Anthropic’s crossing of $30 billion a meaningful milestone – not just in scale, but in the speed of the climb. Four months ago, the gap looked insurmountable.
The compute deal is equally significant. Broadcom disclosed the 3.5GW agreement in an SEC 8-K filing, adding to the 1GW of TPU capacity already coming online this year. The contract runs through 2031 and includes future TPU generations. Mizuho analysts estimate Broadcom will earn $21 billion in AI revenue from Anthropic in 2026 alone, rising to $42 billion in 2027. For the AI industry broadly, the deal validates Google’s TPU ecosystem as a credible alternative to NVIDIA for frontier model training at scale.
Anthropic CFO Krishna Rao described it as “our most significant compute commitment to date.” Broadcom’s filing included a standard risk caveat: consumption of the expanded capacity depends on Anthropic’s continued commercial success – a reminder that run-rate figures extrapolate recent momentum rather than lock in future results.
What’s Next
Anthropic’s next move is likely a further push into enterprise. The doubling of million-dollar accounts in under two months suggests the sales engine is accelerating faster than product development – which creates both opportunity and execution risk as the company scales support and customization for large clients.
The 2027 delivery timeline for the 3.5GW block means Anthropic is already planning for a compute envelope several times larger than today’s. That scale implies model generations – and capability jumps – that aren’t yet publicly on the roadmap.
For rivals, the pressure is structural. Securing multi-gigawatt TPU capacity years in advance is a moat that takes time and capital to replicate. Companies that haven’t made comparable infrastructure commitments will face an increasingly asymmetric compute race heading into 2027 and 2028.
Anthropic’s position has shifted from challenger to pacesetter. The question now is whether revenue growth can justify the infrastructure bets being placed today.
Sources: Anthropic · TechCrunch · CNBC · The Register
