Saitech delivers Supermicro B300 NVIDIA Blackwell AI servers with data center liquid cooling, power, and rack-ready design

Saitech has announced it is offering the Supermicro B300 AI Server based on the NVIDIA Blackwell HGX B300 NVL8 platform, positioning it as enterprise-ready infrastructure for larger AI models and more demanding training and inference workloads. Saitech says it is an authorized Supermicro partner and works with Supermicro technical teams to access, configure, and validate systems for production environments.

According to the announcement, the HGX B300 NVL8 platform integrates eight SXM-based NVIDIA Blackwell GPUs interconnected with NVLink and NVSwitch. Saitech notes the platform combines high-bandwidth HBM3e GPU memory with next-generation NVLink fabric so the system can operate as a unified accelerator for large-scale model parallelism. The company lists target outcomes including faster training of large language models, higher throughput for generative AI and multimodal inference, and scalable performance for high-performance computing and scientific workloads.
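To make the unified-accelerator idea concrete, the sketch below runs an NCCL all-reduce across every visible GPU with PyTorch, the collective-communication pattern that an NVLink/NVSwitch fabric accelerates during model-parallel training. This is a generic, hypothetical illustration rather than Saitech or Supermicro tooling; the master address, port, and tensor size are arbitrary placeholders.

```python
# Minimal sketch: an NCCL all-reduce across all visible GPUs, the collective
# pattern that NVLink/NVSwitch fabrics accelerate. Generic illustration only.
import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp


def worker(rank: int, world_size: int) -> None:
    # One process per GPU; NCCL routes the collective over NVLink/NVSwitch
    # when the GPUs share a fabric, rather than bouncing through PCIe.
    os.environ["MASTER_ADDR"] = "127.0.0.1"  # placeholder rendezvous address
    os.environ["MASTER_PORT"] = "29500"      # placeholder port
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    # Each GPU contributes a distinct tensor; all_reduce sums them in place.
    x = torch.full((1024, 1024), float(rank + 1), device="cuda")
    dist.all_reduce(x, op=dist.ReduceOp.SUM)

    if rank == 0:
        # Expect the sum 1 + 2 + ... + world_size in every element.
        print(f"all-reduce across {world_size} GPUs -> {x[0, 0].item()}")
    dist.destroy_process_group()


if __name__ == "__main__":
    world_size = torch.cuda.device_count()  # 8 on an HGX B300 NVL8 node
    mp.spawn(worker, args=(world_size,), nprocs=world_size)
```

On an eight-GPU node the expected printout is 36.0, and the same collective underlies gradient synchronization in data-parallel training.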

Saitech’s specifications for the Supermicro B300 AI Server include eight NVIDIA Blackwell GPUs in the HGX B300 NVL8 configuration, dual AMD EPYC processors, up to 6 TB of DDR5 Error-Correcting Code (ECC) memory, PCI Express Gen5 (PCIe Gen5) connectivity, hot-swappable Non-Volatile Memory Express (NVMe) storage, and integrated networking up to 800 GbE for multi-node AI clusters. For data center deployment, Saitech reports that Supermicro offers high-density chassis options, including direct liquid-cooling configurations, to increase sustained performance, power efficiency, and rack-level GPU density.
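As a quick way to verify a delivered system against a spec sheet like the one above, NVIDIA's NVML bindings can enumerate the installed GPUs and their memory. A minimal sketch, assuming the nvidia-ml-py package is installed; device names and memory totals depend on the shipped configuration:

```python
# Minimal GPU inventory check via NVML (pip install nvidia-ml-py).
# Illustrative only; output depends on the delivered configuration.
import pynvml

pynvml.nvmlInit()
count = pynvml.nvmlDeviceGetCount()
print(f"GPUs visible: {count}")  # expect 8 on the configuration described above
for i in range(count):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    name = pynvml.nvmlDeviceGetName(handle)
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    print(f"GPU {i}: {name}, {mem.total / 1024**3:.0f} GiB")
pynvml.nvmlShutdown()
```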

Saitech also describes operational features intended for continuous operation, including redundant Titanium-level power supplies, advanced thermal designs with air- and liquid-cooling options, enterprise-grade management and security features delivered through the Baseboard Management Controller (BMC), and rack-scale optimization for data center integration. The company adds that Blackwell performance-per-watt improvements can reduce operating costs for long-running training and inference workloads.
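Out-of-band monitoring of components like these, such as redundant power supplies and fans, is typically done over the BMC's Redfish interface. A hedged sketch follows; the BMC address, credentials, and chassis ID are hypothetical placeholders, and exact resource paths vary by firmware:

```python
# Hedged sketch: polling a Redfish-compliant BMC for power-supply health and
# fan readings. Address, credentials, and chassis ID are placeholders.
import requests

BMC = "https://10.0.0.10"     # hypothetical BMC address
AUTH = ("admin", "password")  # placeholder credentials


def get(path: str) -> dict:
    # verify=False only because many BMCs ship with self-signed certificates;
    # use proper TLS verification in production.
    resp = requests.get(f"{BMC}{path}", auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()
    return resp.json()


power = get("/redfish/v1/Chassis/1/Power")
for psu in power.get("PowerSupplies", []):
    print(psu.get("Name"), psu.get("Status", {}).get("Health"))

thermal = get("/redfish/v1/Chassis/1/Thermal")
for fan in thermal.get("Fans", []):
    print(fan.get("Name"), fan.get("Reading"), fan.get("ReadingUnits"))
```

In production, polling like this usually feeds a fleet-monitoring stack rather than ad hoc scripts.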

For target workloads, Saitech positions the platform for “AI factories” and production AI pipelines, including autonomous and agentic AI systems; multimodal workloads spanning text, vision, video, and audio; and distributed training alongside always-on inference services. Saitech also claims the B300 platform supports multi-trillion-parameter training, low-latency inference, and scalable AI services. The company states that select B300 configurations are in stock and ready to ship for qualified customers.

Source: Saitech
