Meta Platforms and Broadcom announced a sweeping multi-year strategic partnership on April 14, 2026, committing to co-develop Meta's custom MTIA (Meta Training and Inference Accelerator) chips through 2029. Meta has pledged an initial deployment of more than 1 gigawatt of custom AI silicon, with plans to scale to multiple gigawatts over time. The MTIA chips will be the first AI accelerators built on a 2-nanometer process node.
The deal significantly deepens a relationship that began when Meta first tapped Broadcom for custom silicon design. As part of the announcement, Broadcom CEO Hock Tan disclosed that he will not stand for re-election to Meta's board, transitioning instead to an advisory role focused on Meta's custom silicon roadmap.
Why It Matters
The scale of the commitment is the headline. One gigawatt of AI compute capacity, as an opening position rather than a ceiling, reflects the scale of the training and inference workloads Meta expects to run. Meta's total 2026 capital expenditure guidance stands at $115 billion to $135 billion, and the Broadcom partnership is one component of a multi-supplier strategy that also includes agreements for AMD Instinct GPUs, NVIDIA hardware, and Arm-based custom processors across 31 data centers, 27 of which are in the United States.
The 2-nanometer process node is a meaningful technical marker. Current leading-edge AI chips from NVIDIA and AMD are manufactured on 3nm or 4nm processes. Moving to 2nm delivers improvements in power efficiency and transistor density that directly affect training cost and inference throughput at gigawatt-scale deployments. For AI business observers, the timing also matters: Broadcom announced a comparable 3.5-gigawatt TPU deal with Google and Anthropic just weeks earlier, signaling that the company is positioning itself as the dominant custom silicon partner for hyperscaler AI infrastructure.
Broadcom’s stock rose approximately 3% in extended trading on the announcement. The company generates more than $8 billion per quarter in AI-related revenue, and the Meta deal extends that trajectory with contractual visibility through the end of the decade.
What’s Next
Four new MTIA chip generations are planned for deployment within the next two years, with the MTIA 300 already running Meta’s ranking and recommendation systems across its platforms. The roadmap suggests Meta intends to reduce its dependence on third-party GPU suppliers for inference workloads – the most cost-intensive phase of large-scale AI deployment – while continuing to rely on NVIDIA and AMD for training.
The governance change at Meta’s board is worth watching. Hock Tan’s move from director to advisor narrows the formal oversight relationship at a moment when Meta is committing to the largest custom silicon buildout in its history. Whether the advisory structure provides sufficient alignment between the two companies’ roadmaps will become clearer as the first multi-gigawatt deployments approach.
The broader implication for the AI chip industry is supply chain concentration risk. As Meta, Google, Microsoft, and Amazon each lock in multi-year custom silicon agreements with a small number of foundry and design partners, the flexibility of the market contracts. A process node delay, a packaging bottleneck, or a geopolitical disruption at TSMC – the manufacturer for virtually all 2nm production – would affect multiple hyperscalers simultaneously.
Sources: Meta · CNBC · GlobeNewswire
