DDN storage appliance achieves top results in MLPerf Storage benchmark for AI data centers

DDN has reported that its new AI400X3 storage appliance delivered strong performance and efficiency in the recent MLPerf Storage v2.0 benchmark, which the company claims sets a new standard for AI infrastructure. The AI400X3, powered by DDN’s EXAScaler parallel file system, is designed for large-scale AI workloads in data centers, promising substantial throughput from a compact 2U, 2,400-watt chassis.

DDN says the AI400X3 was evaluated in both single-node and multi-node categories to simulate a range of deployment scenarios, from early-stage projects to distributed AI training environments. In these MLPerf tests, the appliance served simulated H100 GPU loads, demonstrating its suitability for data center AI clusters, including those at hyperscale and research facilities.

According to the results released by DDN, the AI400X3 achieved the following in MLPerf Storage v2.0:

  • In single-node benchmarks, the appliance achieved the highest performance density on CosmoFlow and ResNet50 training, serving 52 and 208 simulated H100 GPUs, respectively, within a 2U, 2,400-watt form factor.
  • Input/output performance reached 30.6 GB/s for reads and 15.3 GB/s for writes, yielding Llama3-8B checkpoint load and save times of 3.4 and 7.7 seconds, respectively.
  • In multi-node results, the AI400X3 sustained over 120 GB/s read throughput for Unet3D H100 training, supported up to 640 simulated H100 GPUs on ResNet50, and up to 135 simulated H100 GPUs on CosmoFlow. DDN describes this as a twofold improvement over last year’s results.
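The release quotes throughput and checkpoint times but not the checkpoint size itself. As a back-of-envelope check, the two can be related by simple arithmetic: if a transfer runs at the quoted line rate, the implied checkpoint size is time multiplied by throughput. The sketch below uses only the numbers reported above; the assumption that transfers sustain full line rate is ours, not DDN's.

```python
# Back-of-envelope check on DDN's reported Llama3-8B checkpoint figures.
# Assumption (ours): the load/save transfers run at the quoted read/write
# line rates for their full duration. The checkpoint size is then implied.

READ_GBPS = 30.6   # reported read throughput (GB/s)
WRITE_GBPS = 15.3  # reported write throughput (GB/s)
LOAD_S = 3.4       # reported checkpoint load time (s)
SAVE_S = 7.7       # reported checkpoint save time (s)

def implied_size_gb(throughput_gbps: float, seconds: float) -> float:
    """Checkpoint size implied by a transfer at full line rate."""
    return throughput_gbps * seconds

load_size = implied_size_gb(READ_GBPS, LOAD_S)    # about 104 GB
save_size = implied_size_gb(WRITE_GBPS, SAVE_S)   # about 118 GB
print(f"implied load size: {load_size:.1f} GB, save size: {save_size:.1f} GB")
```

The two implied sizes land in the same rough range (around 100 GB or more), which is plausible for an 8B-parameter model checkpointed with full-precision optimizer state, and suggests the quoted load and save times are consistent with the quoted read and write rates.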


DDN emphasizes that the AI400X3’s compact 2U size and low power requirements address data center constraints on space, power, and cooling. The benchmark results, says DDN, validate the appliance’s ability to keep GPUs fully utilized through fast, reliable data access, reducing training times while enabling regular checkpointing. DDN also highlights that its storage has powered Nvidia’s internal AI clusters since 2016, reflecting ongoing use in real-world, high-performance workloads.

“AI at scale demands more than brute force—it requires precision-engineered infrastructure that can deliver relentless performance, efficiency, and reliability,” said Sven Oehme, CTO at DDN. “With the AI400X3, we’ve achieved exactly that. These MLPerf results prove that DDN can keep pace with—and even outpace—the world’s most advanced GPUs, all within a compact, power-efficient footprint.”

Source: DDN