Helios rack-scale AI platform to use Celestica ORW switches for AMD MI450 GPUs

Celestica and AMD say they’re collaborating to bring AMD’s new “Helios” rack-scale AI platform to market, with Celestica taking on R&D, design, and manufacturing for the platform’s scale-up networking switches. The partnership ties AMD’s rack-scale architecture to Celestica’s switch engineering, targeting large-scale AI clusters.

Celestica said the switches it will develop for “Helios” will be based on the Open Compute Project Open-Rack-Wide (ORW) form factor. The companies said the scale-up switches will use “advanced networking silicon” to enable high-speed interconnect for the next-generation AMD Instinct MI450 Series GPUs, and that the design will use Ultra Accelerator Link over Ethernet (UALoE) for scale-up connectivity. AMD said “Helios” is planned to be available to customers in late 2026.

For data center engineers, the key detail is that Celestica’s scope is specifically the rack-scale scale-up fabric inside AMD’s “Helios” architecture, not the full platform build. Scale-up connectivity is the rack-local (or tightly coupled) network layer that links multiple accelerators, with higher bandwidth and lower latency than the scale-out network between racks. The press release doesn’t provide port counts, per-port bandwidth, radix, power, thermals, or supported topologies for the switches, but it does put the design direction on the record: OCP ORW mechanicals and UALoE as the scale-up interconnect over Ethernet.

Steven Dorwart, senior vice president and general manager, Hyperscalers, Celestica, said, “Our collaboration with AMD on the ‘Helios’ platform brings together our global engineering, manufacturing, and supply chain capabilities with AMD’s innovation in high-performance computing.” Forrest Norrod, executive vice president and general manager, Data Center Solutions Business Group, AMD, said, “‘Helios’ represents a new blueprint for AI infrastructure,” and highlighted “performance, efficiency, and flexibility” as goals of the platform.

The companies also said they’re collaborating to support deployments of “Helios” across cloud, enterprise, and research environments, with an emphasis on reducing “time-to-value” and improving supply chain resiliency for organizations investing in AI. The announcement doesn’t include pricing, customer names, or a deployment timeline beyond AMD’s statement that “Helios” will be available to customers in late 2026.

Source: Celestica
