Meta Platforms is doubling down on its custom silicon strategy, signaling a long-term move to control its own AI destiny and reduce its dependency on Nvidia.

Meta Platforms will co-develop multiple generations of custom artificial intelligence processors with Broadcom through 2029, securing an initial commitment of over one gigawatt of computing capacity in a direct challenge to Nvidia's dominance in the AI hardware market.
The tie-up helps "build out the massive computing foundation we need to deliver personal superintelligence to billions of people," Meta CEO Mark Zuckerberg said in a statement.
The expanded deal builds on Meta's existing partnership with the chip designer and is the first phase of a planned multi-gigawatt rollout. The agreement also includes using Broadcom's Ethernet networking technology to connect Meta's AI compute clusters. As part of the deal, Broadcom CEO Hock Tan will step down from Meta's board, which he joined in 2024, to take on an advisory role for Meta's custom chip strategy.
The move underscores a growing trend among tech giants like Google and Amazon to design their own application-specific integrated circuits (ASICs) to curb soaring costs and supply constraints associated with Nvidia's powerful but expensive GPUs. For Broadcom, which saw its stock jump 3.5 percent on the news, the deal solidifies its position as a key enabler of the custom AI chip boom.
Meta's push for custom silicon centers on its Meta Training and Inference Accelerator (MTIA) program. The first chip, the MTIA 300, is already used in the company's ranking and recommendation models. The company has a roadmap for three more advanced versions through 2027, geared toward inference, the process of running trained AI models to generate responses. While Meta's chips are for internal use only, the strategy mirrors that of Google, which has been producing its own Tensor Processing Units (TPUs) with Broadcom since 2015, and Amazon's development of its Trainium and Inferentia chips.
The expanded partnership is a significant win for Broadcom, coming just two weeks after it announced a similar long-term agreement to produce TPUs for Google. These deals make Broadcom a primary beneficiary of the tech industry's shift away from a total reliance on general-purpose GPUs from Nvidia and AMD, creating a massive market for specialized, cost-effective ASICs.
The strategic shift has clear financial motivations. By developing chips in-house, Meta aims to gain more control over its hardware-software stack, optimize performance for its specific AI workloads, and significantly lower its long-term capital expenditures on costly Nvidia hardware. While Meta's stock was little changed on the announcement, the deal reinforces its long-term investment in AI, a narrative that has strong support on Wall Street.
According to TipRanks, 39 of 45 analysts rate Meta stock a "Buy," with an average price target of $856.08, implying a 35 percent upside from current levels. The success of the MTIA program, enabled by this Broadcom partnership, will be a critical factor for investors watching whether Meta can achieve its ambitious AI goals without seeing its margins eroded by hardware costs.
This article is for informational purposes only and does not constitute investment advice.