ASUS launches scalable data center storage and server solutions for AI, HPC and enterprise workloads

ASUS has announced a new portfolio of enterprise storage and high-performance computing (HPC) infrastructure solutions at Supercomputing 2025 (SC25). According to ASUS, this offering addresses the demands of modern data centers supporting artificial intelligence (AI), HPC, and memory-intensive workloads with a complete range of storage platforms and GPU-powered servers engineered for data-driven environments.

The storage lineup includes block, file, object, and software-defined systems. Key solutions highlighted are:

  • VS320D-RS12 Block Storage: Uses active-active dual controllers with Intel Xeon processors, scaling to 7.1 PB of capacity with SSD caching and auto-tiering. ASUS positions it for virtualization and business-critical workloads.
  • VS320D-RS12U Unified Storage: Supports multiple protocols, with built-in SSD caching, auto-tiering, S3 cloud synchronization, disaster recovery, and data reduction. Intel Xeon dual controllers ensure rapid and secure data access.
  • OJ340A-RS60A Object Storage: An AMD platform providing S3 and Swift compatibility via a Ceph framework, with NVMe acceleration for AI data lakes, analytics, and exabyte-scale archiving.
  • VS320D-RS12J JBOD Expansion: Offers up to 7.1 PB in 78- or 12-bay configurations, 12 Gb/s SAS 3.0 interfaces, redundant power, and multipath connectivity for density-focused environments.
  • AI/HPC Storage Platform: Developed with partners such as WEKA, IBM, VAST Data, and Hammerspace, this AMD EPYC platform supports software-defined storage optimized for GPU-accelerated AI and HPC applications, emphasizing unified data management and low latency.

The centerpiece of the server lineup is the XA AM3A-E13 platform, featuring eight AMD Instinct MI355X GPUs and dual AMD EPYC 9005 processors. ASUS states the system delivers expanded low-precision data-type support (including FP4 and FP6), 288 GB of high-bandwidth memory per GPU, up to 8 TB/s of memory bandwidth, and direct GPU-to-GPU interconnect in a modular 10U enclosure. Engineered for efficient training and inference of large AI models, it is aimed at both AI and HPC workloads.
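To illustrate what FP4 support means in practice, here is a minimal Python sketch of rounding weights to the FP4 (E2M1) value set defined in the OCP Microscaling (MX) formats specification. This is an illustrative quantization example, not ASUS or AMD code; the value set and the nearest-value rounding rule are the assumptions.

```python
# Illustrative sketch: nearest-value quantization to the FP4 (E2M1) value set.
# E2M1 (1 sign, 2 exponent, 1 mantissa bit) can represent the magnitudes
# {0, 0.5, 1, 1.5, 2, 3, 4, 6} per the OCP Microscaling (MX) FP4 definition.
FP4_VALUES = sorted({s * m for s in (-1.0, 1.0)
                     for m in (0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0)})

def quantize_fp4(x: float) -> float:
    """Round x to the nearest representable FP4 (E2M1) value."""
    return min(FP4_VALUES, key=lambda v: abs(v - x))

# Hypothetical model weights, squeezed into 4 bits each:
weights = [0.27, -1.4, 2.6, 5.1, -0.05]
print([quantize_fp4(w) for w in weights])  # → [0.5, -1.5, 3.0, 6.0, 0.0]
```

Storing weights at 4 bits instead of 16 cuts memory footprint and bandwidth per parameter by 4x, which is why low-precision formats matter for serving large models from high-bandwidth memory.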

Other notable platforms include the RS520QA-E13-RS8U, a 2U four-node server supporting Compute Express Link (CXL) 2.0 for shared memory and low-latency access in AI and HPC environments. Additional servers such as the RS720A-E13-RS8U and RS700A-E13-RS12U offer dual-socket designs, PCI Express 5.0 expansion, and large memory capacity for virtualization and enterprise computing.

ASUS also presented Intel-based offerings, including the ESC8000-E12P AI server, which supports up to eight dual-slot GPUs, such as Intel Gaudi 3 PCI Express accelerators, each with 128 GB of high-bandwidth memory and 3.7 TB/s of bandwidth. PCI Express 5.0 architecture and integrated Ethernet enable scale-out for enterprise AI and retrieval-augmented generation (RAG) workloads. The RS720-E12-RS24U and RS700-E12-RS4U platforms are built for balanced performance and high I/O density using Intel Xeon 6 processors.

Source: ASUS
