FS launches PicOS AI switch system for high-density data center networking

FS has announced its PicOS AI Switch System, a data center networking solution designed for large-scale artificial intelligence (AI) training, inference, and high-performance computing (HPC) workloads. The system integrates Broadcom Tomahawk series switching chips, FS’s PicOS network operating system, and the AmpCon-DC management platform.

According to FS, the PicOS AI Switch System is engineered for lossless RDMA over Converged Ethernet v2 (RoCEv2) networking, ultra-low latency, and intelligent traffic optimization. The goal, the company says, is to maximize GPU utilization and deliver reliable cluster performance in high-density data center environments.

The PicOS AI Switch portfolio includes 400 Gb/s (400G) and 800 Gb/s (800G) models built on Broadcom Tomahawk 3, 4, and 5 series chips. FS says these switches provide high bandwidth, deterministic performance, and scalable connectivity, meeting the demands of AI training, inference, and HPC deployments.

FS highlights features such as redundant architecture, deep buffers, and advanced congestion management to ensure lossless performance and operational resilience. The AmpCon-DC platform enables deployment, configuration, and lifecycle management of large GPU-based networks, supporting faster scaling and reducing operational complexity for data center operators.
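FS does not publish implementation details, but the ECN-based congestion management it refers to follows a well-known pattern in lossless RoCEv2 fabrics: a switch queue begins marking packets once its depth crosses a threshold, signaling senders to slow down before the buffer overflows. The sketch below is a generic illustration of that marking mechanism only; the class names and threshold values are hypothetical and are not PicOS code.

```python
import random
from dataclasses import dataclass


@dataclass
class Packet:
    size: int               # payload size in bytes
    ecn_marked: bool = False


class SwitchQueue:
    """Toy egress queue with ECN-style marking thresholds (illustrative only)."""

    def __init__(self, buffer_bytes: int, kmin: int, kmax: int):
        self.buffer_bytes = buffer_bytes  # total buffer for this queue
        self.kmin = kmin  # depth at which probabilistic ECN marking begins
        self.kmax = kmax  # depth above which every packet is marked
        self.depth = 0    # current occupancy in bytes
        self.queue: list[Packet] = []

    def enqueue(self, pkt: Packet) -> bool:
        # In a lossless RoCEv2 fabric, priority flow control (PFC) would
        # pause the sender before this drop occurs; the tail drop below is
        # shown only for completeness.
        if self.depth + pkt.size > self.buffer_bytes:
            return False
        if self.depth > self.kmax:
            pkt.ecn_marked = True                     # hard congestion: mark all
        elif self.depth > self.kmin:
            frac = (self.depth - self.kmin) / (self.kmax - self.kmin)
            pkt.ecn_marked = random.random() < frac   # probabilistic marking
        self.queue.append(pkt)
        self.depth += pkt.size
        return True
```

ECN-marked packets cause the receiver to send congestion notifications back to the sender, which throttles its injection rate; that feedback loop, combined with PFC, is what keeps an RDMA fabric lossless under load.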

FS positions the system primarily for building scalable, high-performance GPU clusters in data centers running AI and accelerated-computing workloads. The company also notes applications beyond that core market, in the broader enterprise, telecom, and HPC sectors.

Source: FS
