MSI has launched the XpertStation WS300, a deskside system based on the NVIDIA DGX Station architecture and aimed at large language model, generative AI, and data science workloads. It is built around the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip and is available to order starting today.
MSI lists up to 784 GB of “large coherent memory,” combining HBM3e GPU memory and LPDDR5X CPU memory into a single unified domain intended to speed CPU-to-GPU data sharing during large-model training and fine-tuning. For networking, the XpertStation WS300 includes dual 400GbE ports on an NVIDIA ConnectX-8 SuperNIC, for up to 800 Gb/s of aggregate bandwidth to support distributed AI workloads and multi-node scaling.
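To illustrate what a coherent CPU-GPU memory domain means for developers in practice, here is a minimal CUDA sketch assuming the unified address space is exposed through standard CUDA managed memory; the buffer size and kernel are illustrative placeholders, not MSI or NVIDIA code.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Illustrative kernel: scale a buffer in place on the GPU.
__global__ void scale(float *data, size_t n, float factor) {
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const size_t n = 1 << 20;   // illustrative element count, not a WS300 spec
    float *buf = nullptr;

    // One allocation visible to both CPU and GPU. On a coherent
    // Grace Blackwell system the driver can share or migrate pages
    // instead of forcing explicit cudaMemcpy staging between
    // LPDDR5X (CPU) and HBM3e (GPU) memory.
    cudaMallocManaged(&buf, n * sizeof(float));

    for (size_t i = 0; i < n; ++i) buf[i] = 1.0f;   // CPU writes directly

    scale<<<(n + 255) / 256, 256>>>(buf, n, 2.0f);  // GPU uses the same pointer
    cudaDeviceSynchronize();

    printf("buf[0] = %f\n", buf[0]);                // CPU reads the result, no copy
    cudaFree(buf);
    return 0;
}
```

The point of the single pointer is exactly the workflow MSI is pitching: model weights and activations that exceed GPU memory can spill into CPU memory without the application managing two copies of the data.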
On the I/O side, MSI calls out high-speed PCIe Gen5 and Gen6 NVMe storage support, positioning the box for fast dataset ingest and AI data pipelines. It also supports the NVIDIA AI Software Stack as an integrated hardware-and-software platform spanning desktop development through data center deployment.
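As a rough sketch of what fast dataset ingest from local NVMe can look like, the snippet below reads a file straight into GPU memory with NVIDIA's GPUDirect Storage (cuFile) API. GPUDirect Storage is an assumption on my part, since MSI's materials only cite PCIe Gen5/Gen6 NVMe support, and the file path and buffer size are placeholders.

```cuda
#include <cufile.h>         // NVIDIA GPUDirect Storage (libcufile)
#include <cuda_runtime.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

int main() {
    const size_t bytes = 256UL << 20;              // 256 MiB, placeholder size
    const char *path = "/data/shard-000.bin";      // placeholder dataset shard

    cuFileDriverOpen();                            // initialize the cuFile driver

    int fd = open(path, O_RDONLY | O_DIRECT);      // O_DIRECT enables the GDS path
    if (fd < 0) { perror("open"); return 1; }

    CUfileDescr_t descr = {};
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
    CUfileHandle_t fh;
    cuFileHandleRegister(&fh, &descr);

    void *devBuf = nullptr;
    cudaMalloc(&devBuf, bytes);
    cuFileBufRegister(devBuf, bytes, 0);           // pin the GPU buffer for DMA

    // DMA directly from NVMe into GPU memory, skipping a host bounce buffer.
    ssize_t n = cuFileRead(fh, devBuf, bytes, /*file_offset=*/0, /*devPtr_offset=*/0);
    printf("read %zd bytes into GPU memory\n", n);

    cuFileBufDeregister(devBuf);
    cudaFree(devBuf);
    cuFileHandleDeregister(fh);
    close(fd);
    cuFileDriverClose();
    return 0;
}
```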
Deskside DGX-class systems are a direct response to a practical workflow problem: teams often want a local system that behaves like the larger cluster environment they’ll ultimately deploy to, without fighting WAN latency or pushing sensitive datasets into external infrastructure. But the real test is whether the platform’s memory coherence and I/O keep GPUs fed under sustained training and inference, because underutilized accelerators get expensive fast.
MSI says the system can be used as a centralized AI compute node for collaborative fine-tuning and on-demand deployment, with an emphasis on keeping proprietary data and intellectual property under the organization’s control. “With NVIDIA, we are defining the next era of AI infrastructure, bridging centralized performance and distributed innovation, and enabling organizations to move from experimentation to production with greater speed, scale, and confidence,” said Danny Hsu, General Manager of MSI’s Enterprise Platform Solutions.
MSI also highlights NVIDIA NemoClaw, describing it as an open-source stack that installs the OpenShell runtime with a policy-controlled sandbox, intended to let always-on autonomous AI agents operate more safely. According to MSI, developers running OpenShell on the XpertStation WS300 can work with trillion-parameter models locally, backed by up to 20 petaFLOPS of AI compute and 784 GB of memory.
Source: MSI