Celestica's new DS6000-series switches aim to solve the networking bottleneck in AI data centers, directly challenging existing infrastructure limits with 102.4 Tbps of throughput.

Celestica Inc. is escalating the arms race for AI data center infrastructure, making its DS6000-series 1.6TbE switches available to order. The platform provides 102.4 terabits per second of throughput, aiming to break the network bottlenecks that constrain large-scale artificial intelligence training clusters.
"The move to 1.6TbE networking represents a monumental leap in data center evolution," said Gavin Cato, SVP & GM of AI Platform Engineering at Celestica. "We have engineered a solution that not only meets current throughput demands but helps future-proof the AI fabric for our global customers."
Powered by Broadcom's Tomahawk 6 silicon, the DS6000-series features 64 ports of 1.6TbE connectivity. The launch includes two models: the air-cooled DS6000 for standard 19-inch racks and the hybrid-cooled DS6001 for 21-inch OCP environments. The platform's open-standard design, using SONiC (Software for Open Networking in the Cloud), supports both copper and advanced optical interconnects to maximize architectural flexibility.
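The headline bandwidth figure follows directly from the port count. As a quick back-of-the-envelope check (this sketch is illustrative and not part of the announcement), 64 ports at 1.6 Tb/s each yields the quoted aggregate:

```python
# Sanity check: 64 ports x 1.6 Tb/s per port should match
# the quoted 102.4 Tb/s of aggregate switching throughput.
ports = 64
port_speed_tbps = 1.6  # one 1.6TbE port

total_tbps = ports * port_speed_tbps
print(total_tbps)  # 102.4
```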
With global investment in AI infrastructure surging, as seen in I Squared Capital's planned $10 billion-plus expansion in Brazil for AI data centers, Celestica is positioning itself as a critical supplier for the industry's next phase. The DS6000's capacity directly addresses the massive bandwidth requirements of AI factories, intensifying competition for hardware providers catering to the hyperscale market.
The announcement signals the platform's move from development to commercial availability, placing Celestica among the first to ship systems based on Broadcom's latest Tomahawk 6 silicon. This is critical as the industry transitions to 102.4T switching to support the exponential growth of generative AI workloads, which demand unprecedented levels of east-west traffic within data centers for model training.
"By being among the first to ship systems powered by our Tomahawk 6 silicon, Celestica is delivering a high-density 1.6TbE networking fabric essential for scaling next-generation AI clusters," said Hasan Siraj, Vice President of Product Management in the Core Switching Group at Broadcom.
Industry analysts affirm Celestica's market position. "Celestica has established a leading position in high-speed data center switch port shipments," said Sameh Boujelbene, Vice President at Dell’Oro Group, noting their ability to deliver high-density 800G and 1.6TbE solutions.
A key element of Celestica's strategy is its reliance on open standards, including the Ultra Ethernet Consortium (UEC) and Open Compute Project (OCP) specifications. This approach contrasts with more proprietary, full-stack solutions and offers customers greater flexibility to avoid vendor lock-in. The use of the open-source SONiC network operating system reinforces this, providing a customizable and scalable software layer.
The availability of the DS6000 series through distributors like TD SYNNEX is set to accelerate adoption. "As AI and machine learning workloads scale exponentially, our partners need high-density, open-standard solutions that can eliminate complex networking bottlenecks," commented Dennis Levenson, Vice President, Vendor Management at TD SYNNEX.
The move to 1.6TbE is a direct response to the performance demands of modern 'AI factories.' As models become larger and more complex, the network fabric is often the limiting factor for training efficiency. By offering a 102.4 Tbps non-blocking platform, Celestica provides a path for hyperscalers and large enterprises to scale their infrastructure, a trend underscored by massive capital injections into the sector globally.
This article is for informational purposes only and does not constitute investment advice.