Tachyum open sources 281 GB/s TDIMM to boost data center server performance and reduce power costs

Tachyum has disclosed details of its open source DDR5 memory module, TDIMM, designed to meet the growing computational and memory demands of artificial intelligence (AI) and data center workloads. According to Tachyum, TDIMM aims to enable AI models with parameter counts far exceeding those supported by current solutions, while reducing both cost and power requirements compared with existing memory technology.

Tachyum reports that a TDIMM module delivers 281 GB/s of bandwidth, 5.5 times the 51 GB/s of a standard DDR5 registered DIMM (RDIMM). Each Prodigy Ultimate processor can drive 24 channels, reaching a total bandwidth of 6.7 TB/s, an 11-fold increase over a conventional 12-channel CPU. The TDIMM can be configured as 256 GB (standard), 512 GB (tall), or up to 1 TB (extra tall) per module, with high-capacity configurations using through-silicon vias to reach as much as 3 PB per node.
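The reported ratios follow directly from the quoted per-module and per-channel figures. A back-of-the-envelope check (illustrative arithmetic only, using Tachyum's published numbers):

```python
# Per-module bandwidth figures reported by Tachyum (GB/s)
tdimm_bw = 281
rdimm_bw = 51

# Module-level speedup: 281 / 51 ~= 5.5x
speedup = tdimm_bw / rdimm_bw

# Prodigy Ultimate aggregate: 24 channels of TDIMM
prodigy_total = 24 * tdimm_bw           # 6,744 GB/s ~= 6.7 TB/s

# Conventional baseline: 12 channels of DDR5 RDIMM
baseline_total = 12 * rdimm_bw          # 612 GB/s

ratio = prodigy_total / baseline_total  # ~11x

print(f"module speedup: {speedup:.1f}x")
print(f"aggregate: {prodigy_total / 1000:.1f} TB/s, {ratio:.0f}x baseline")
```

Running this prints a 5.5x module speedup and a 6.7 TB/s aggregate at roughly 11x the 12-channel baseline, matching the figures in the announcement.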

The TDIMM employs a 484-pin connector with a 128-bit data width and 16 bits of error-correcting code, versus the DDR5 RDIMM's 288-pin, 64-bit data configuration. This doubles memory bandwidth with only a 38 percent increase in signal count. The design also cuts the number of DRAM chips per module by 10 percent, which is expected to lower module cost by 10 percent. Physical dimensions remain compatible with DDR5 RDIMM and MRDIMM, allowing adoption in both existing and higher-density 3U servers, and the contact pitch is unchanged, so only modified plastics are required rather than a new connector design.

On power, Tachyum notes that the higher bandwidth brings roughly 30 percent higher power consumption than DDR5 RDIMM, a gap expected to narrow as newer DRAM chips are adopted. From an engineering perspective, upgrading to TDIMM requires only minor changes to current DDR5 memory controllers and PHYs, chiefly widening the data path from 80 to 144 bits; existing buffer chips are unchanged, paving the way for rapid commercialization. Looking ahead, Tachyum claims similar evolutionary updates will deliver up to 13.5 TB/s of memory bandwidth in 2027, with high-density AI configurations targeting as much as 27 TB/s in 2028.

Tachyum is open sourcing the TDIMM specification royalty-free, with the stated goals of accelerating global adoption, reducing cost, and enabling high-bandwidth memory manufacturing in regions that have lagged in current memory technologies. For technical specifications and licensing inquiries, Tachyum has directed interested parties to its technical brief and contact links.

“The TDIMM is key in reducing the cost of AI systems trained on all the knowledge from $8 trillion and 276 gigawatts to $78 billion and 1 gigawatt in 2028,” said Dr. Radoslav Danilak, founder and CEO of Tachyum. “The TDIMM ushers in the era of affordable AI trained on all written knowledge produced by humanity, accessible to many companies and nations.”

Source: Tachyum
