Mirantis OpenStack for Kubernetes 25.2 improves disconnected operations and GPU workload support for data centers

Mirantis has announced the availability of Mirantis OpenStack for Kubernetes (MOSK) version 25.2, which adds enhanced GPU workload orchestration and improved disconnected operations for data centers. The update is intended to help operators meet growing infrastructure requirements for artificial intelligence (AI), including high-throughput model training and confidential data handling.

MOSK 25.2 allows OpenStack clouds to operate entirely offline, supporting strict regulatory environments such as finance, government, and defense, where networks must be isolated and external artifacts must undergo security review before entering the data center. Mirantis says these disconnected operations help organizations keep pace with upstream innovation while maintaining tight control over sensitive data, which is key for AI model development and data sovereignty.

The release features technical updates that include:

  • Support for OpenStack 2025.1 “Epoxy” for new deployments and upgrades from the previous 2024.1 “Caracal” release.
  • Open Virtual Network (OVN) 24.03 integration, delivering performance enhancements and the latest security patches, and providing a migration path from Open vSwitch to a more modern, scalable networking model.
  • OpenSDN 24.1 available as an alternative, with a modernized codebase and expanded IPv6 support.
  • Full Layer 3 networking on bare metal for horizontal scaling across racks without virtual LAN stretch, paired with infrastructure monitoring for proactive issue detection in switches or routing.
  • Support for hybrid AI infrastructure, allowing recovery of bare-metal GPU servers even during networking issues and enabling their integration with virtual machines for high-performance AI training.

The platform continues to offer full lifecycle management for on-premises private clouds, unifying bare-metal provisioning, software configuration, centralized logging, monitoring, and alerting for both AI-focused and traditional workloads.

Artem Andreev, Senior Engineering Manager at Mirantis, said, “AI workloads mean big changes to general-purpose compute infrastructure,” adding, “With the latest MOSK, organizations can scale GPU-powered workloads, along with the ability to support secure, disconnected operations that don’t sacrifice openness or flexibility.”

Source: Mirantis
