VDURA has announced Version 12 of its Data Platform, targeting AI and high-performance computing (HPC) environments with new features for scalability, performance, and resilience. The company claims v12 delivers gains in aggregate throughput, metadata scaling, and storage efficiency, specifically addressing the needs of hyperscale operators and large-scale data centers.
The release introduces an elastic Metadata Engine that scales dynamically across nodes, supporting billions of files and objects under active use, and eliminating bottlenecks as file and object counts grow. VDURA reports this delivers up to 20 times faster metadata operations compared to previous versions.
System-wide Snapshot Support is also included in v12, allowing instantaneous, space-efficient, point-in-time copies of datasets. Snapshots can be triggered manually or through defined policies, supporting AI pipelines, model checkpointing, and operational recovery requirements.
To further optimize capacity for data centers, VDURA adds support for Shingled Magnetic Recording (SMR) hard drives, integrating a new write-placement engine that organizes writes into the drives' sequential zones. VDURA claims this enables 25 to 30 percent more capacity per rack while maintaining data integrity and throughput, extending the platform's appeal for hyperscale storage and cold-data tiers.
The platform's aggregate throughput is reported to increase by over 20 percent, improving efficiency for training, inference, and bulk data movement. v12 will be generally available in the second quarter of 2026 for V5000- and V7000-class systems, with in-place upgrades available for existing customers running version 11 or later.
“With VDURA v12, we’re increasing throughput performance by more than 20 percent, accelerating metadata operations by up to 20X, and reducing cost per terabyte by over 20 percent,” said Ken Claffey, CEO of VDURA. “At the same time, we’re simplifying data protection with new snapshot capabilities that make backup and recovery effortless at any scale.”
Source: VDURA