Celestica and AMD say they’re collaborating to bring AMD’s new “Helios” rack-scale AI platform to market, with Celestica taking on R&D, design, and manufacturing for the platform’s scale-up networking switches. The announcement ties AMD’s rack-scale architecture to Celestica’s networking switch work, aimed at large-scale AI clusters.
Celestica said the switches it will develop for “Helios” will be based on the Open Compute Project Open Rack Wide (ORW) form factor. The companies said the scale-up switches will use “advanced networking silicon” to enable high-speed interconnect for the next-generation AMD Instinct MI450 Series GPUs, and that the design will use Ultra Accelerator Link over Ethernet (UALoE) for scale-up connectivity. AMD said “Helios” is planned to be available to customers in late 2026.
For data center engineers, the key detail here is that Celestica’s scope is specifically the rack-scale scale-up fabric inside AMD’s “Helios” architecture, not the full platform build. Scale-up connectivity is the rack-local (or tightly coupled) network layer typically used to connect multiple accelerators at higher bandwidth and lower latency than the scale-out networking between racks. The PR doesn’t provide port counts, per-port bandwidth, radix, power, thermals, or supported topologies for the switches, but it does put the design direction on the record: OCP ORW mechanicals and UALoE as the scale-up interconnect over Ethernet.
Steven Dorwart, senior vice president and general manager, Hyperscalers, Celestica, said, “Our collaboration with AMD on the ‘Helios’ platform brings together our global engineering, manufacturing, and supply chain capabilities with AMD’s innovation in high-performance computing.” Forrest Norrod, executive vice president and general manager, Data Center Solutions Business Group, AMD, said, “‘Helios’ represents a new blueprint for AI infrastructure,” and highlighted “performance, efficiency, and flexibility” as goals of the platform.
The companies also said they’re collaborating to support deployments of “Helios” across cloud, enterprise, and research environments, with an emphasis on reducing “time-to-value” and improving supply chain resiliency for organizations investing in AI. The announcement doesn’t include pricing, customer names, or a deployment timeline beyond AMD’s statement that “Helios” will be available to customers in late 2026.
Source: Celestica