SambaNova details SN50 inference chip for data center-scale agentic AI

SambaNova has introduced its SN50 AI chip, announced a planned collaboration with Intel focused on AI inference infrastructure and cloud expansion, and reported more than $350 million in strategic Series E financing. SambaNova positions SN50 as a data center-oriented inference accelerator for “agentic AI,” and says the combination of new hardware, manufacturing expansion, and cloud capacity is intended to move autonomous AI agents into production with lower latency and lower total cost of ownership.

SambaNova says SN50 delivers five times more compute per accelerator and four times more network bandwidth than the previous generation. The platform reportedly links up to 256 accelerators over a multi-terabyte-per-second interconnect to reduce time-to-first-token and support larger batch sizes, targeting higher-throughput, lower-latency inference for larger, longer-context models. SN50 is built on the company's Reconfigurable Data Unit (RDU) architecture and includes a three-tier memory hierarchy intended to support "10T+ parameter models and 10M+ context lengths," plus "resident multi-model memory and agentic caching" to improve utilization and reduce cost-per-token. According to the company, SN50 "uses existing power and is air cooled," with customer shipments expected later in 2026.
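For a rough sense of why a tiered memory hierarchy matters at that scale, the back-of-envelope sketch below estimates the weight and KV-cache footprint of a 10-trillion-parameter model serving a 10-million-token context. The 10T, 10M, and 256-accelerator figures come from the release; the precision, layer count, and attention-head dimensions are illustrative assumptions, not SambaNova or SN50 specifications.

```python
# Back-of-envelope memory sizing for a hypothetical 10T-parameter model.
# The model dimensions below are illustrative assumptions, not SN50 specs.

PARAMS = 10e12          # "10T+ parameter models" per the release
BYTES_PER_PARAM = 1     # assume FP8 weights
ACCELERATORS = 256      # SN50's stated maximum cluster size

weight_bytes = PARAMS * BYTES_PER_PARAM
print(f"Weights: {weight_bytes / 1e12:.1f} TB "
      f"({weight_bytes / ACCELERATORS / 1e9:.0f} GB per accelerator)")

# KV cache for a single 10M-token request, assuming a hypothetical
# 128-layer model with 128 KV heads of dimension 128; grouped-query
# attention would shrink this considerably.
CONTEXT = 10e6
LAYERS, KV_HEADS, HEAD_DIM = 128, 128, 128
kv_bytes = CONTEXT * LAYERS * 2 * KV_HEADS * HEAD_DIM * BYTES_PER_PARAM
print(f"KV cache, one 10M-token request: {kv_bytes / 1e12:.1f} TB")
```

Even under these rough assumptions, a single 10M-token context (about 42 TB of KV cache here) can dwarf the roughly 10 TB of FP8 weights, which is presumably the pressure the tiered memory and "agentic caching" claims are meant to relieve.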

The press release calls out data center deployment directly: SoftBank Corp. is named as the first customer expected to deploy SN50 “within its next-generation AI data centers in Japan,” targeting low-latency inference services for sovereign and enterprise customers across Asia-Pacific. The release also says SoftBank already hosts SambaCloud for regional developers, and that anchoring new clusters on SN50 positions SambaNova as “the inference backbone” for SoftBank’s sovereign AI initiatives and future large-scale agentic services.

SambaNova and Intel have announced a planned multi-year strategic collaboration aimed at "high-performance, cost-efficient AI inference solutions" for AI-native companies, model providers, enterprises, and government organizations. SambaNova says the collaboration includes scaling its vertically integrated AI cloud on Intel Xeon-based infrastructure optimized for large language and multimodal models, and combining SambaNova systems with Intel CPUs, accelerators, and networking technologies. Intel also plans to make a strategic investment in SambaNova to accelerate the rollout of an Intel-powered AI cloud, according to the release.

On funding, SambaNova says the “oversubscribed” Series E round was led by Vista Equity Partners and Cambium Capital, with participation from Intel Capital, and that proceeds will expand SN50 production, scale SambaCloud, and deepen enterprise software integrations. The release also cites demand across financial services, telecommunications, energy, and sovereign deployments.

Source: SambaNova
