Qumulo has announced three new solutions—Helios AI Agent, Cloud AI Accelerator, and AI Networking—targeted at improving AI-driven data management in enterprise and data center environments. According to Qumulo, the new offerings aim to improve visibility, scalability, and performance for managing and moving unstructured data across hybrid, edge, and cloud environments.
The Qumulo Helios AI Agent is designed to provide autonomous operations within enterprise data systems by integrating system-wide telemetry. Built into Qumulo’s Data Operating System, Helios processes billions of operational events daily across hybrid and multi-cloud infrastructures. It detects emerging anomalies, predicts capacity and performance issues before they impact operations, and generates prescriptive recommendations or automated remediation workflows. Qumulo states that Helios supports the Model Context Protocol (MCP), allowing external orchestration frameworks and partner applications to integrate with its data platform reasoning fabric.
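For context, MCP is a JSON-RPC 2.0-based protocol, so an external orchestration tool would typically start by asking an MCP server which tools it exposes via a tools/list request. The sketch below builds and sends such a request in Python; the Helios endpoint URL and the use of a bare HTTP POST are assumptions for illustration only, and a production client would normally use an MCP SDK and a properly initialized session rather than hand-rolled requests.

```python
import json
import urllib.request

# Hypothetical MCP endpoint exposed by a Helios deployment -- not a documented URL.
HELIOS_MCP_URL = "https://helios.example.internal/mcp"

# MCP uses JSON-RPC 2.0; "tools/list" asks the server which tools (actions and
# queries) it makes available to external agents and orchestration frameworks.
request_body = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}

req = urllib.request.Request(
    HELIOS_MCP_URL,
    data=json.dumps(request_body).encode("utf-8"),
    headers={"Content-Type": "application/json", "Accept": "application/json"},
    method="POST",
)

with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read())
    # A compliant server answers with a "result" object containing a "tools" array.
    for tool in reply.get("result", {}).get("tools", []):
        print(tool["name"], "-", tool.get("description", ""))
```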
The Qumulo Cloud AI Accelerator is intended to provide high-speed data movement for AI and analytics workloads between enterprise data centers and public cloud environments. By leveraging Qumulo’s NeuralCache technology, the solution offers predictive data prefetching and accelerated streaming across on-premises, edge, and cloud environments. Qumulo claims it dynamically optimizes data paths and connects exabyte-scale datasets to major hyperscale compute environments, including AWS, Azure, Google Cloud, and Oracle Cloud Infrastructure. According to Qumulo, this reduces cloud replication overhead and ensures timely delivery of datasets for AI training, inference, or rendering.
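Qumulo has not published NeuralCache internals, but the general idea behind predictive prefetching can be shown with a toy read-ahead cache: when access looks sequential, the next blocks are pulled in before the client asks for them. Everything in the sketch below (block granularity, window size, the fetch_block callable) is illustrative and not tied to Qumulo's implementation.

```python
from collections import OrderedDict

class ReadAheadCache:
    """Toy illustration of predictive prefetching: detect sequential access
    and pull the next few blocks into cache before the client requests them."""

    def __init__(self, fetch_block, capacity=256, window=4):
        self.fetch_block = fetch_block   # callable: block_id -> bytes (e.g. a cloud GET)
        self.capacity = capacity         # maximum number of cached blocks
        self.window = window             # how far ahead to prefetch
        self.cache = OrderedDict()
        self.last_block = None

    def read(self, block_id):
        # Serve from cache when possible; otherwise fetch on demand.
        if block_id not in self.cache:
            self._insert(block_id, self.fetch_block(block_id))
        self.cache.move_to_end(block_id)

        # Simple predictor: a sequential step triggers read-ahead of the next blocks.
        if self.last_block is not None and block_id == self.last_block + 1:
            for ahead in range(block_id + 1, block_id + 1 + self.window):
                if ahead not in self.cache:
                    self._insert(ahead, self.fetch_block(ahead))
        self.last_block = block_id
        return self.cache[block_id]

    def _insert(self, block_id, data):
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict the least recently used block

# Example: pretend each block comes from remote object storage.
cache = ReadAheadCache(fetch_block=lambda i: f"block-{i}".encode())
for i in range(8):          # a sequential scan, so later blocks are already prefetched
    cache.read(i)
```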
Qumulo AI Networking introduces natively supported high-performance data movement protocols for accelerated computing, including Remote Direct Memory Access (RDMA), RDMA over Converged Ethernet v2 (RoCEv2), and Network File System (NFS) over RDMA, with Simple Storage Service (S3) over RDMA currently in development. These protocols are intended to deliver near-memory bandwidth between Qumulo storage and GPU-based compute clusters, including NVIDIA DGX and AMD Instinct systems, reducing latency and CPU consumption for large-scale AI workloads. Qumulo emphasizes that this unifies traditionally separate storage and compute environments into a single high-performance domain.
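As a point of reference, NFS over RDMA is already mountable from standard Linux clients using the proto=rdma transport option (20049 is the conventional NFS/RDMA port). The minimal sketch below wraps that mount command in Python; the server address, export path, and mount point are placeholders, and the exact options a Qumulo cluster expects would come from its own documentation.

```python
import subprocess

# Placeholder values -- substitute the real cluster address and export path.
SERVER = "qumulo.example.internal"
EXPORT = "/datasets"
MOUNTPOINT = "/mnt/datasets"

# Standard Linux NFS-over-RDMA mount: proto=rdma selects the RDMA transport
# (InfiniBand or RoCEv2), and 20049 is the conventional NFS/RDMA port.
# Requires an RDMA-capable NIC and the xprtrdma kernel module on the client.
subprocess.run(
    [
        "mount", "-t", "nfs",
        "-o", "proto=rdma,port=20049",
        f"{SERVER}:{EXPORT}",
        MOUNTPOINT,
    ],
    check=True,
)
```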
Initial access to these capabilities is available through a preview program, with general availability expected in the next quarter. Qumulo will demonstrate the solutions at SC25 at booth 4407.
Source: Qumulo