MiTAC shows OCP-compliant servers and liquid-cooled AI rack at CloudFest 2026

MiTAC Computing is spotlighting a mix of AI-ready servers, OCP-compliant platforms, and a liquid-cooled rack-scale system at CloudFest 2026, with demo collaborations involving AMD and Intel. The lineup targets high-density GPU compute and cloud deployments where power efficiency, serviceability, and thermals tend to be the limiting factors long before raw CPU specs are.

On the GPU server side, MiTAC highlighted the MiTAC G4520G6, built around Intel Xeon 6 processors and supporting up to eight double-width PCIe Gen5 GPUs. MiTAC also detailed the HG68-B8016, a multi-node platform integrating five single-socket nodes based on AMD EPYC 4005 Series processors, with DDR5-5600 memory and NVMe storage per node. For dual-socket GPU compute, the TN85-B8261 supports up to four dual-slot GPUs and includes 24 DDR5-6400 RDIMM slots plus tool-less NVMe storage carriers.

For OCP-aligned infrastructure, MiTAC listed several platforms aimed at higher-density and more modular deployments. The C2810Z5 is an OCP-compliant server supporting AMD EPYC 9005/9004 processors with E1.S and U.2 NVMe storage configuration options, an optimized thermal design, and support for dense ORv3 deployments. The C2811Z5 is an OCP multi-node server based on AMD EPYC 9005 Series processors, with 12 DDR5-6400 memory slots per node (up to 3 TB per node) and NVMe E1.S storage. On the Intel side, the R2520G6 uses dual Intel Xeon 6 processors, supports up to 32 DDR5-6400 RDIMMs, and scales to up to 24 U.2 NVMe drives, using an OCP-style modular architecture for networking, management, and high-bandwidth I/O. MiTAC also called out the M2810Z5 as a cloud-optimized enterprise server using OCP-aligned networking and storage, including OCP NIC 3.0 and E1.S NVMe, in a multi-node design.

At rack scale, the headline system was MiTAC’s MR1100 series, a high-density 48U EIA liquid-cooled rack aimed at large-scale AI training. MiTAC said the system integrates AMD Instinct MI355X GPUs—up to eight per node—and AMD EPYC 9005 Series CPUs with up to 6 TB of memory per unit, scaling from 64 to 256 GPUs. The rack uses cold-plate cooling and AMD Pensando Pollara 400 AI NICs, with a stated 400/800 Gb/s network fabric.

Liquid cooling and OCP-style modularity are both responses to the same blunt constraint: as racks get denser, you need hardware designs that reduce friction in servicing, cabling, and thermals, or your operations model breaks before your compute plan does.

MiTAC also said it will present an on-stage case study with Qarnot, focused on a high-performance computing deployment in France across aerospace, automotive, energy, and banking.

Source: MiTAC Computing Technology Corporation
