Supermicro unveils U.S.-built data center AI platforms with NVIDIA Vera Rubin and rack-scale GPU systems

Supermicro has announced plans to deliver the NVIDIA Vera Rubin NVL144 and NVIDIA Vera Rubin CPX AI platforms in 2026, delivering more than three times the AI attention acceleration of the previous-generation NVIDIA Blackwell Ultra platforms. The company is also introducing new data center systems—including a compact 2OU NVIDIA HGX B300 eight-GPU server in an OCP-based rack-scale design—that can scale up to 144 GPUs per rack. All new government-optimized systems are developed and validated at Supermicro's San Jose, California facilities to ensure compliance with the Trade Agreements Act (TAA) and the Buy American Act, supporting U.S. federal procurement requirements.

Supermicro reports that it is expanding its portfolio to support full-stack NVIDIA AI Factory for Government reference designs. This expansion includes the Super AI Station, a deskside AI supercomputer powered by the NVIDIA GB300 Superchip that delivers more than five petaFLOPS (PFLOPS) of compute performance in a 5U tower form factor. The system supports up to 784 GB of coherent memory and an integrated NVIDIA ConnectX-8 SuperNIC, and features closed-loop direct-to-chip liquid cooling with optional rack-mounting. The Super AI Station is designed for on-premises AI model training and prototyping by organizations that require low latency and high data security.

For rack-scale deployments, Supermicro has announced the general availability of its ARS-121GL-NB2B-LCC NVL4 rack-scale platform, optimized for GPU-accelerated high-performance computing (HPC) and AI workloads such as simulation, modeling, and genomics. Each 4U node combines four NVIDIA Blackwell B200 GPUs and two Grace Superchips with direct-to-chip liquid cooling and up to 800 Gb/s of dedicated bandwidth per GPU over NVIDIA Quantum InfiniBand networking. The platform supports up to 128 GPUs in a 48U rack for high-density data center scenarios, with power supplied via busbar.

Supermicro is also planning to integrate newly announced NVIDIA networking technologies, including the NVIDIA BlueField-4 data processing unit (DPU) and the NVIDIA ConnectX-9 SuperNIC, across its AI factory solutions to enable faster cluster-scale AI networking, storage, and data-processing offload. Supermicro's modular system architecture is designed to allow rapid adoption of these new capabilities with minimal re-engineering.

The company targets U.S. government and enterprise data center customers involved in cybersecurity, data analytics, engineering, healthcare, modeling, simulation, and secure virtualized environments, aiming to meet stringent compliance and supply chain security standards with U.S.-based production and validation.

“Our expanded collaboration with NVIDIA and our focus on U.S.-based manufacturing position Supermicro as a trusted partner for federal AI deployments,” said Charles Liang, president and CEO, Supermicro. “The result of many years of working hand-in-hand with our close partner NVIDIA—also based in Silicon Valley—Supermicro has cemented its position as a pioneer of American AI infrastructure development.”

Source: Supermicro