MSI debuts modular ORv3 rack and AI-optimized platforms for high-density data centers

MSI has introduced its next-generation rack-scale hardware platform and a range of AI-optimized servers targeting data center and hyperscale environments, announced at Supercomputing 2025 (SC25). The lineup includes an ORv3 21-inch 44OU rack, power-efficient multi-node servers, and platforms built on the NVIDIA MGX and desktop DGX Station reference designs. MSI says the modular systems are engineered for performance, energy efficiency, and operational flexibility in mission-critical, high-density workloads.

The new solutions leverage the Datacenter Modular Hardware System (DC-MHS) architecture, which spans host processor modules, compute nodes, and AI server platforms. MSI claims that standardizing the hardware and baseboard management controller (BMC) architecture streamlines operations and reduces deployment complexity. With support for EVAC (extended volume air cooling) CPU heatsinks, the systems are designed to maintain thermal efficiency in demanding AI and analytics environments.

The ORv3 21-inch 44OU rack comes as a fully validated configuration, combining power, thermal, and networking components to enable rapid deployment in hyperscale data centers. It supports up to sixteen CD281-S4051-X2 2OU DC-MHS servers with centralized 48 V power shelves, front-facing input/output, and design choices focused on airflow and maintainability. Each CD281-S4051-X2 is a dual-node, single-socket AMD EPYC 9005 server platform with 12 DDR5 DIMM slots and 12 PCIe 5.0 x4 NVMe bays per node.

MSI also introduced multiple high-density compute configurations. Core Compute servers are available in 2U 4-node and 2U 2-node designs, supporting either AMD EPYC 9005 Series or Intel Xeon 6 processors (up to 500 W thermal design power). Node-specific variants offer 12 to 16 DDR5 DIMM slots and differing NVMe storage bay configurations. Enterprise server models support up to 32 DDR5 DIMM slots and scale to high-core-count, high-TDP CPUs, with both AMD and Intel options in 1U and 2U chassis suited for cloud, virtualization, and storage workloads.

For AI and accelerated computing requirements, MSI has announced solutions built on the NVIDIA MGX and DGX Station architectures. Notable AI server platforms support dual AMD EPYC or Intel Xeon 6 CPUs, eight dual-width GPU slots capable of housing up to 600 W GPUs each, and large DDR5 memory capacities. Networking is delivered via up to eight 400G Ethernet interfaces with NVIDIA ConnectX-8 SuperNICs. For edge and small-scale deployments, 2U models support four high-wattage GPUs and up to 16 DDR5 DIMMs.

On the desktop, the MSI AI Station CT60-S8060 employs the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip with up to 784 GB of unified memory, targeting desktop use cases that require data center-class AI model development, training, and deployment.

“Our goal is to deliver scalable, energy-efficient infrastructure that empowers customers to accelerate AI development and next-generation computing with performance, reliability, and flexibility at scale,” said Danny Hsu, General Manager of Enterprise Platform Solutions at MSI.

Source: MSI
