DDN storage appliance achieves top results in MLPerf Storage benchmark for AI data centers

DDN has reported that its new AI400X3 storage appliance delivered high performance and efficiency in the recent MLPerf Storage v2.0 benchmark, setting what the company claims is a new standard for AI infrastructure. The AI400X3, powered by DDN’s EXAScaler parallel file system, is designed for large-scale AI workloads in data centers, promising substantial throughput from a compact 2U, 2400-watt chassis.

DDN says the AI400X3 was evaluated in both single-node and multi-node categories to simulate a range of deployment scenarios, from early-stage projects to distributed AI training environments. In these MLPerf tests, the appliance served simulated H100 GPU loads, demonstrating its suitability for data center AI clusters, including those at hyperscale and research facilities.

According to the results released by DDN, the AI400X3 achieved the following in MLPerf Storage v2.0:

  • In single-node benchmarks, the appliance achieved the highest performance density on CosmoFlow and ResNet-50 training, serving 52 and 208 simulated H100 GPUs, respectively, within a 2U, 2400-watt form factor.
  • Input/output performance reached 30.6 GB/s reads and 15.3 GB/s writes, yielding checkpoint load and save times for Llama3-8B of 3.4 and 7.7 seconds, respectively.
  • In multi-node results, the AI400X3 sustained over 120 GB/s read throughput for Unet3D H100 training, supported up to 640 simulated H100 GPUs on ResNet-50, and up to 135 simulated H100 GPUs on CosmoFlow. DDN notes this as a twofold improvement over last year’s results.
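As a rough sanity check on the checkpoint figures above, multiplying the reported bandwidths by the load and save times gives the implied data volume moved per checkpoint. This is a back-of-the-envelope sketch using only the numbers in DDN's release; the actual Llama3-8B checkpoint size is not stated:

```python
# Assumption: data moved per checkpoint ~= sustained bandwidth x elapsed time.
read_bw_gbs, write_bw_gbs = 30.6, 15.3   # reported read/write throughput (GB/s)
load_s, save_s = 3.4, 7.7                # reported checkpoint load/save times (s)

implied_load_gb = read_bw_gbs * load_s   # data read during a checkpoint load
implied_save_gb = write_bw_gbs * save_s  # data written during a checkpoint save

print(f"implied load: {implied_load_gb:.0f} GB, save: {implied_save_gb:.0f} GB")
# -> implied load: 104 GB, save: 118 GB
```

Both figures land near the ~128 GB one would expect for a full training checkpoint of an 8B-parameter model (weights plus optimizer state at roughly 16 bytes per parameter), which suggests the quoted times reflect sustained, not burst, throughput.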


DDN emphasizes that the AI400X3’s compact 2U size and low power requirements address data center constraints on space, power, and cooling. The benchmark results, says DDN, validate the appliance’s ability to keep GPUs fully utilized with rapid and reliable data access, reducing training times while enabling regular checkpointing. DDN also highlights its long-standing partnership with Nvidia, whose internal AI clusters it has powered since 2016, reflecting ongoing use in real-world, high-performance workloads.

“AI at scale demands more than brute force—it requires precision-engineered infrastructure that can deliver relentless performance, efficiency, and reliability,” said Sven Oehme, CTO at DDN. “With the AI400X3, we’ve achieved exactly that. These MLPerf results prove that DDN can keep pace with—and even outpace—the world’s most advanced GPUs, all within a compact, power-efficient footprint.”

Source: DDN