DigitalOcean has announced new GPU Droplets powered by AMD Instinct MI350X GPUs for its Agentic Inference Cloud, positioning the instances for lower-latency and higher-throughput inference on complex models. It also said it plans to deploy AMD Instinct MI355X GPUs in the next quarter, which will add liquid-cooled racks to its offering and expand access to accelerators aimed at larger datasets and models.
DigitalOcean says the AMD Instinct MI350X Series is built on the AMD CDNA 4 architecture and targets generative AI and high-performance computing (HPC) workloads. It notes the GPUs support training massive AI models, high-speed inference, and complex HPC tasks such as scientific simulations, data processing, and computational modeling. DigitalOcean also says the platform can optimize the compute-bound prefill phase while delivering low-latency inference and high token-generation throughput, and that its memory capacity supports loading large models with longer context windows, enabling higher inference request density per GPU.
“These results demonstrate that the DigitalOcean Agentic Inference Cloud isn’t just about providing raw compute, but about delivering the operational efficiency, inference optimizations, and scale required for demanding AI builders,” said Vinay Kumar, Chief Product and Technology Officer at DigitalOcean. “The availability of the AMD Instinct™ MI350X GPUs, combined with DigitalOcean’s inference optimized platform offers our customers a boost in performance and the massive memory capacity needed to run the world’s most complex AI workloads while delivering compelling unit economics.”
DigitalOcean cited earlier results from optimizing AMD Instinct GPUs for Character.AI, saying the work doubled production request throughput and cut inference costs by 50 percent. It also pointed to ACE Studio as a customer building on AMD Instinct MI350X GPUs to run complex inference workloads while managing costs. GPU Droplets powered by AMD Instinct MI350X GPUs are available in DigitalOcean’s Atlanta data center region.
Beyond the GPU hardware, DigitalOcean says it is emphasizing operational packaging: transparent usage-based pricing with flexible contracts and no hidden fees; provisioning and configuration “in just a few clicks” for security, storage, and networking requirements; and access to enterprise features, including enterprise-grade Service Level Agreements, observability tooling, and HIPAA-eligible and SOC 2-compliant offerings.
Source: DigitalOcean