The next wave of AI profits is moving from the chip designers to the data center builders, creating a new set of market leaders.

(Bloomberg) -- The investment narrative driving the artificial intelligence boom is expanding beyond the primary chipmakers, with a growing focus on the critical data center infrastructure required to support large-scale AI models. An analysis from The Motley Fool identifies a shift toward networking and specialized inference hardware as the next phase of the AI supercycle.
"While the first wave of AI investment focused on the training hardware, the next wave will be driven by the networking that ties it all together and the efficiency of inference," the April 7 report said.
The analysis highlights data center networking specialists Broadcom (AVGO) and Arista Networks (ANET) as prime beneficiaries of this trend. It also points to Alphabet's (GOOGL) development of custom Tensor Processing Units (TPUs) as a key advantage in the increasingly important AI inference market, where efficiency and low operational costs are paramount.
This shift suggests that while companies like Nvidia captured the initial surge, investors are now looking for growth in the essential but less-hyped infrastructure layer. Sustained demand for data center build-outs could reorder the list of top-performing AI stocks as the market matures.
The explosive growth of AI models has created unprecedented demand for data center capacity, and more specifically for the high-speed networking needed to connect thousands of GPUs. Broadcom and Arista Networks are positioned directly in the path of this demand. Broadcom is a key supplier of the high-bandwidth Ethernet switch silicon and custom chips that are essential to AI networking fabrics, while Arista Networks has built its business on the high-speed, low-latency switches that are critical to the performance of large AI clusters. As enterprises and cloud providers race to build out their AI capabilities, spending on this networking backbone is expected to grow substantially, providing a durable tailwind for both companies.
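For a rough sense of how large that backbone gets, consider a back-of-the-envelope sketch of the fabric a big training cluster implies. The cluster size, link speed, and switch radix below are illustrative assumptions, not figures from the report or from either vendor.

```python
# Back-of-the-envelope estimate of the network fabric behind a large AI
# training cluster. Every figure below is an illustrative assumption.

gpus = 16_000               # hypothetical cluster size
nic_gbps = 400              # assumed network link per GPU, in gigabits/s
ports_per_switch = 64       # assumed switch radix at that link speed

# Aggregate bandwidth the fabric must carry at the server edge.
edge_tbps = gpus * nic_gbps / 1_000

# In a non-blocking two-tier Clos fabric, half of each leaf switch's ports
# face the servers and half face the spine layer.
host_ports_per_leaf = ports_per_switch // 2
leaf_switches = -(-gpus // host_ports_per_leaf)   # ceiling division

print(f"Edge bandwidth: {edge_tbps:,.0f} Tb/s")
print(f"Leaf switches for a non-blocking fabric: {leaf_switches}")
# The spine tier adds a comparable number of switches again, so the
# silicon bill grows roughly in step with the GPU count.
```

Even with these placeholder numbers, a 16,000-GPU cluster already implies hundreds of high-radix switches before the spine tier is counted, which is the kind of demand Broadcom's silicon and Arista's systems address.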
While much of the market has focused on hardware for training AI models, the long-term cost of AI will be dominated by inference, the process of running trained models to generate answers. Alphabet's long-standing investment in its own custom TPU hardware gives it a significant edge here. Designed specifically for Google's workloads, TPUs can offer superior performance per watt on inference tasks compared with more general-purpose GPUs. That efficiency could translate into a multi-billion-dollar cost advantage as usage of its AI services scales, strengthening the competitive moat around its cloud division and its own AI-powered products.
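To see how an efficiency edge becomes a dollar figure, here is a minimal cost sketch. The query volume, per-query serving cost, and efficiency multiple are hypothetical placeholders, not numbers from Alphabet or the report.

```python
# Sketch of how an inference-efficiency advantage compounds at scale.
# Every input below is a hypothetical assumption for illustration only.

queries_per_day = 5_000_000_000    # assumed daily AI queries served
cost_per_query_gpu = 0.002         # assumed all-in serving cost, USD/query
tpu_efficiency_gain = 1.5          # assumed TPU advantage over GPUs (1.5x)

# A 1.5x efficiency gain means each query costs 1/1.5 as much to serve.
cost_per_query_tpu = cost_per_query_gpu / tpu_efficiency_gain

annual_queries = queries_per_day * 365
annual_savings = annual_queries * (cost_per_query_gpu - cost_per_query_tpu)

print(f"Hypothetical annual serving-cost savings: ${annual_savings / 1e9:.1f}B")
# With these placeholder inputs, the gap is on the order of a billion
# dollars a year, which is why efficiency dominates inference economics.
```

Even under made-up inputs the savings land in the billions annually, which illustrates why inference efficiency becomes the decisive variable once usage scales.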
This article is for informational purposes only and does not constitute investment advice.