Qumulo launches Helios AI Agent, Cloud AI Accelerator, and AI Networking for data center AI workflows

Qumulo has announced three new solutions—Helios AI Agent, Cloud AI Accelerator, and AI Networking—targeted at improving AI-driven data management in enterprise and data center environments. According to Qumulo, these innovations aim to improve visibility, scalability, and performance for the management and movement of unstructured data across hybrid, edge, and cloud environments.

The Qumulo Helios AI Agent is designed to provide autonomous operations within enterprise data systems by integrating system-wide telemetry. Built into Qumulo’s Data Operating System, Helios processes billions of operational events daily across hybrid and multi-cloud infrastructures. It detects emerging anomalies, predicts capacity and performance issues before they impact operations, and generates prescriptive recommendations or automated remediation workflows. Qumulo states that Helios supports the Model Context Protocol (MCP), allowing external orchestration frameworks and partner applications to integrate with its data platform reasoning fabric.

The Qumulo Cloud AI Accelerator is intended to provide high-speed data movement for AI and analytics workloads between enterprise data centers and public cloud environments. By leveraging Qumulo’s NeuralCache technology, the solution offers predictive data prefetching and accelerated streaming between on-premises, edge, and cloud. Qumulo claims it dynamically optimizes data paths and connects exabyte-scale datasets to major hyperscale compute environments, including AWS, Azure, Google Cloud, and Oracle Cloud Infrastructure. According to Qumulo, this reduces cloud replication overhead and ensures timely delivery of datasets for AI training, inference, or rendering.

Qumulo AI Networking introduces natively supported high-performance data movement protocols for accelerated computing, including Remote Direct Memory Access (RDMA), RDMA over Converged Ethernet v2 (RoCEv2), and Network File System (NFS) over RDMA, with Simple Storage Service (S3) over RDMA currently in development. These protocols are intended to deliver near-memory bandwidth between Qumulo storage and GPU-based compute clusters, including NVIDIA DGX and AMD Instinct systems, reducing latency and CPU consumption for large-scale AI workloads. Qumulo emphasizes that this unifies traditionally separate storage and compute environments into a single high-performance domain.
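On a standard Linux client, NFS over RDMA of the kind described above is enabled at mount time by selecting the RDMA transport. The sketch below is a generic, hypothetical example—the server name, export path, and mount point are placeholders, and Qumulo's own configuration steps may differ.

```shell
# Load the kernel's NFS-over-RDMA transport module (requires RDMA-capable
# NICs, e.g. RoCEv2 or InfiniBand, and a kernel built with rpcrdma support).
modprobe rpcrdma

# Mount the export using proto=rdma; 20049 is the IANA-registered
# NFS-over-RDMA port. Host, export, and mount point are placeholders.
mount -t nfs -o vers=3,proto=rdma,port=20049 \
    qumulo-node.example.com:/exports/training-data /mnt/training-data

# Confirm the RDMA transport is in use for the mounted filesystem.
mount | grep /mnt/training-data
```

Because RDMA moves data directly between the NIC and application memory, the client CPU is largely bypassed during transfers—which is the mechanism behind the reduced latency and CPU consumption claimed for GPU-cluster workloads.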

Initial access to these capabilities is available through a preview program, with general availability expected next quarter. Qumulo will demonstrate the solutions at SC25, booth 4407.

Source: Qumulo
