Supermicro GPU servers accelerate AI factory deployment for Lambda at Cologix data center

Super Micro Computer has announced that Lambda has deployed a wide array of Supermicro GPU-optimized servers, including systems based on NVIDIA Blackwell architecture, to expand Lambda’s artificial intelligence (AI) infrastructure. The deployment is part of an initiative launched in June 2024 at the Cologix COL4 Scalelogix data center in Columbus, Ohio, giving enterprise customers in the Midwest region access to advanced AI compute resources.

According to Supermicro, Lambda’s selection of servers includes the SYS-A21GE-NBRT with NVIDIA HGX B200, SYS-821GE with NVIDIA HGX H200, and SYS-221HE-TNR, all featuring Intel Xeon Scalable processors. The setup also integrates Supermicro’s AI Supercluster with NVIDIA GB200 and GB300 NVL72 racks, designed to support large-scale training and inference workflows in AI data centers.

Supermicro reports that its advanced liquid cooling technology reduces power and cooling costs, supporting Lambda’s requirements for energy efficiency and rapid deployment. The collaboration aims to enable Lambda to deploy next-generation AI accelerators at scale, with specific applications targeting data center environments and AI factories serving enterprises and hyperscale operators.

“Lambda is on a mission to accelerate the path to Superintelligence, building gigawatt-scale AI factories for training and inference for the world’s top AI labs, enterprises and hyperscalers,” said Ken Patchett, VP of Data Center Infrastructure at Lambda. “As we strive to create infinite-scale compute for our customers, the depth of Supermicro’s server portfolio is a valuable asset for meeting our present and future needs.”

The collaboration also leverages Cologix’s interconnected data center footprint, intended to support low-latency, high-capacity AI workloads across Columbus and the greater Midwest. Supermicro, Lambda, and Cologix are targeting rapid AI development for sectors such as healthcare, finance, manufacturing, retail, and logistics, with the intention of enabling production-ready AI solutions that are compatible with hyperscaler environments.

Source: Super Micro Computer
