Micron is in high-volume production of three products aimed at NVIDIA’s Vera Rubin platform: HBM4 36 GB 12H, a PCIe Gen6 data center SSD (the Micron 9650), and a 192 GB SOCAMM2 module. Together, the parts target the bandwidth, capacity, and power constraints that show up fast in dense AI training and inference infrastructure.
For memory on the accelerator side, Micron’s HBM4 36 GB 12H is designed for NVIDIA Vera Rubin. Micron lists pin speeds above 11 Gb/s, which translates to more than 2.8 TB/s of bandwidth per stack, and it cites a better-than-20% power-efficiency improvement over its HBM3E at the same 36 GB 12H stack height.
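The 2.8 TB/s figure follows directly from the per-pin speed and the stack’s interface width. A quick sanity check, assuming the 2048-bit data interface that JEDEC defines for HBM4 (the interface width is not stated in Micron’s announcement):

```python
# Sanity-check Micron's stated HBM4 bandwidth figure.
# Assumption: a 2048-bit data interface per HBM4 stack (per the JEDEC HBM4 spec).
pin_speed_gbps = 11          # Gb/s per pin, Micron's quoted floor
interface_width_bits = 2048  # bits per stack (assumed, not in the announcement)

bandwidth_gb_s = pin_speed_gbps * interface_width_bits / 8  # divide by 8: bits -> bytes
print(f"{bandwidth_gb_s / 1000:.2f} TB/s")  # prints 2.82 TB/s, matching ">2.8 TB/s"
```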
Micron also says it has shipped samples of a 16-high, 48 GB HBM4 stack to customers. The company characterizes that 48 GB cube as a 33% increase in capacity per HBM placement over the HBM4 36 GB 12H.
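The 33% figure is just the ratio of the two stated capacities:

```python
# Verify the stated capacity increase per HBM placement.
hbm4_12h_gb = 36  # 12-high stack
hbm4_16h_gb = 48  # 16-high sample stack
increase = (hbm4_16h_gb - hbm4_12h_gb) / hbm4_12h_gb
print(f"{increase:.0%}")  # prints 33%
```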
On the storage side, Micron says it’s the first company to mass-produce a PCIe Gen6 data center SSD, the Micron 9650. Micron positions the drive for agentic AI workloads on NVIDIA BlueField-4 STX architecture, and it calls out liquid-cooled environments as a design target. Performance figures listed for the 9650 include up to 28 GB/s sequential read throughput and up to 5.5 million random read IOPS. Micron also states the 9650 delivers up to two times the read performance of Gen5 at 100% higher performance per watt.
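Taken together, the two Gen6 claims carry an implication worth making explicit: since power equals performance divided by performance per watt, doubling performance while also doubling performance per watt leaves read power roughly unchanged. A sketch of that reasoning, using only the ratios Micron states:

```python
# Micron's Gen6 claim: 2x the read performance of Gen5, at 100% higher
# performance per watt. Power = performance / (performance per watt),
# so the two ratios together imply roughly unchanged read power.
perf_ratio = 2.0           # 2x read performance vs. Gen5
perf_per_watt_ratio = 2.0  # "100% higher" performance per watt
power_ratio = perf_ratio / perf_per_watt_ratio
print(power_ratio)  # prints 1.0: same read power as the Gen5 part, per these figures
```

Whether that iso-power result holds under real queue depths and thermals is exactly what the validation caveat later in the piece is about.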
The third product Micron calls out in high-volume production is the 192 GB SOCAMM2, part of a broader SOCAMM2 portfolio spanning 48 GB to 256 GB capacities. Micron says SOCAMM2 is designed for NVIDIA Vera Rubin NVL72 systems and standalone NVIDIA Vera CPU platforms, enabling up to 2 TB of memory and 1.2 TB/s of bandwidth per CPU.
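The 2 TB-per-CPU ceiling lines up with the top of that capacity range. Assuming the maximum configuration uses the 256 GB module (Micron does not state the module count or per-CPU channel layout), the arithmetic works out to eight placements:

```python
# How many SOCAMM2 modules would it take to reach the stated 2 TB per CPU?
# Assumption (not in the announcement): the max config uses the 256 GB module.
max_module_gb = 256
target_gb = 2 * 1024  # 2 TB
modules_needed = target_gb / max_module_gb
print(int(modules_needed))  # prints 8
```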
There’s a practical through-line here: HBM bandwidth and capacity set the ceiling for accelerator-side utilization, while storage throughput and efficiency increasingly gate data movement when clusters are tuned for high request concurrency. But operators will still want to validate what the Gen6 SSD performance-per-watt claim looks like under their own queue depths, temperatures, and cooling architectures.
“Our close collaboration with NVIDIA ensures that compute and memory are designed to scale together from day one,” said Sumit Sadana, executive vice president and chief business officer at Micron Technology. He added, “With HBM4 36GB 12H, alongside the industry’s first SOCAMM2 and Gen6 SSD now in high-volume production, Micron’s memory and storage form a core foundation that unlocks the full potential of next-generation AI.”
Source: Micron.