DigitalOcean adds AMD Instinct MI350X GPU Droplets for production AI inference

DigitalOcean has announced new GPU Droplets powered by AMD Instinct MI350X GPUs for its Agentic Inference Cloud, positioning the instances for lower-latency and higher-throughput inference on complex models. It also said it plans to deploy AMD Instinct MI355X GPUs in the next quarter, which will add liquid-cooled racks to its offering and expand access to accelerators aimed at larger datasets and models.

DigitalOcean says the AMD Instinct MI350X Series is built on the AMD CDNA 4 architecture and targets generative AI and high-performance computing workloads. It notes the GPUs support training massive AI models, high-speed inference, and complex high-performance computing workloads such as scientific simulations, data processing, and computational modeling. DigitalOcean also says the platform can optimize the compute-bound prefill phase while enabling low-latency inference and high token-generation throughput, with support for loading large models, larger context windows, and higher inference request density per GPU.

“These results demonstrate that the DigitalOcean Agentic Inference Cloud isn’t just about providing raw compute, but about delivering the operational efficiency, inference optimizations, and scale required for demanding AI builders,” said Vinay Kumar, Chief Product and Technology Officer at DigitalOcean. “The availability of the AMD Instinct™ MI350X GPUs, combined with DigitalOcean’s inference optimized platform offers our customers a boost in performance and the massive memory capacity needed to run the world’s most complex AI workloads while delivering compelling unit economics.”

DigitalOcean cited earlier results from optimizing AMD Instinct GPUs for Character.AI, saying the work doubled production request throughput and cut inference costs by 50 percent. It also pointed to ACE Studio as a customer building on AMD Instinct MI350X GPUs for complex inference workloads while managing costs. GPU Droplets powered by AMD Instinct MI350X are available in DigitalOcean’s Atlanta data center region.

Beyond the GPU hardware, DigitalOcean says it is emphasizing operational packaging: transparent usage-based pricing with flexible contracts and no hidden fees; provisioning and configuration “in just a few clicks” for security, storage, and networking requirements; and access to enterprise features including enterprise-grade Service Level Agreements, observability features, and HIPAA-eligible and SOC 2 compliant offerings.

Source: DigitalOcean

Get Data Center Engineering News In Your Inbox:

