Credo has introduced Weaver, the first product in its OmniConnect lineup: a memory fanout gearbox designed to relieve memory bottlenecks in artificial intelligence (AI) inference systems and data center deployments. The company says Weaver is engineered to boost both memory bandwidth and density, targeting scalability and efficiency for high-performance compute clusters and next-generation data center hardware.
According to Credo, Weaver leverages the company's proprietary 112G very short reach (VSR) serializer/deserializer (SerDes) technology. The architecture increases input/output (I/O) density by up to 10x, supporting up to 6.4 TB of memory and 16 TB/s of bandwidth using standard low-power double data rate (LPDDR5X) memory. These figures are intended to surpass conventional memory architectures and address the known limitations of LPDDR5X and graphics double data rate (GDDRX) solutions, while avoiding the high cost and availability constraints typical of high-bandwidth memory (HBM).
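A rough sense of scale behind those headline numbers: a minimal back-of-envelope sketch, assuming a standard LPDDR5X x64 package running at 8533 MT/s (roughly 68 GB/s peak per package). These per-package figures are general LPDDR5X characteristics, not details Credo has confirmed for Weaver-based systems.

```python
# Assumed LPDDR5X package parameters (typical JEDEC-class values,
# not Credo-specified): per-pin rate and bus width.
LPDDR5X_RATE_MTS = 8533   # transfer rate in megatransfers/second
BUS_WIDTH_BITS = 64       # package bus width

# Peak bandwidth per package in GB/s: MT/s * bits / 8 bits-per-byte / 1000
per_package_gbs = LPDDR5X_RATE_MTS * BUS_WIDTH_BITS / 8 / 1000  # ~68.3 GB/s

# Weaver's stated aggregate figures
TOTAL_BW_TBS = 16.0       # 16 TB/s aggregate bandwidth
TOTAL_CAP_TB = 6.4        # 6.4 TB total capacity

# How many packages would be needed to hit the bandwidth target,
# and what capacity each would then carry
packages = TOTAL_BW_TBS * 1000 / per_package_gbs        # ~234 packages
capacity_per_pkg_gb = TOTAL_CAP_TB * 1000 / packages    # ~27 GB each

print(f"~{packages:.0f} packages, ~{capacity_per_pkg_gb:.0f} GB per package")
```

Under these assumptions, the stated totals work out to a fleet of a couple hundred commodity LPDDR5X packages at a few tens of GB each, which is consistent with the article's framing of Weaver as a fanout device for standard memory rather than a new memory type.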
The product supports adaptive memory configurations through flexible dynamic random access memory (DRAM) packaging and late binding, which Credo says lets system integrators and operators match systems to evolving AI model requirements without sacrificing throughput or density. Weaver is also designed for straightforward migration to next-generation memory protocols, and it integrates telemetry and diagnostics features to improve system uptime and reliability.
For data center operators and technology vendors building large-scale AI workloads or hyperscale servers, Credo positions Weaver as an enabling component for maximizing memory access in high-density AI inference scenarios. The company emphasizes that Weaver is available for design-in immediately, with general release scheduled for the second half of 2026.
“Weaver is designed to deliver the flexibility and scalability required for future AI inference systems,” said Don Barnetson, Senior Vice President, Product at Credo. “This innovation empowers our partners to optimize memory provisioning, reduce costs, and accelerate deployment of advanced AI workloads.”
“The future of AI acceleration requires efficiency at all levels and innovative technology to process extremely large workloads,” said Mitesh Agrawal, CEO of Positron. “Credo’s Weaver is instrumental in helping us solve our toughest memory challenges, enabling us to deliver the high-performance compute power for our next generation of AI inference servers.”
Source: Credo