Prodigy Universal Processor claims major AI and data center performance gains

Tachyum has released technical details of its Prodigy Universal Processor, built on a 2-nanometer process, stating that it delivers significant AI and data center performance improvements. According to Tachyum, the Prodigy Ultimate processor can support AI models with orders of magnitude more parameters than current solutions, while reducing costs and power requirements for hyperscale and enterprise data centers.

Tachyum reports that Prodigy Ultimate achieves up to 21.3 times higher AI rack performance than NVIDIA’s Rubin Ultra NVL576. Its custom DDR5 DIMM, called TDIMM, increases per-DIMM memory bandwidth 5.5 times, from 51 GB/s to 281 GB/s. The processor’s 24 memory channels provide a total of 8 TB/s, an 8.2 times improvement over Tachyum’s previous design and an 11 times increase over current 12-channel CPUs. TDIMM modules are offered in capacities of 256 GB (standard), 512 GB (tall), and 1 TB (extra tall), with through-silicon via (TSV) stacking scaling memory up to 2 × 96 TB per processor, or roughly 3 petabytes per node.
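As a quick sanity check, the per-DIMM bandwidth uplift and the node-level capacity figures above can be reproduced with simple arithmetic (all inputs are Tachyum's claims as reported, not independent measurements; the 16-socket node size comes from the configuration described below):

```python
# Sanity checks on the reported memory figures (all values are
# Tachyum's claims; rounding follows the article's precision).

tdimm_bw = 281   # GB/s per TDIMM, as claimed
ddr5_bw = 51     # GB/s per standard DDR5 DIMM, the stated baseline
print(round(tdimm_bw / ddr5_bw, 1))        # per-DIMM uplift -> 5.5

per_cpu_tb = 2 * 96                        # TSV-stacked TB per processor
node_pb = 16 * per_cpu_tb / 1000           # 16-socket node, decimal PB
print(per_cpu_tb, round(node_pb, 2))       # 192 TB per CPU, ~3.07 PB
```

The 281/51 ratio rounds to the 5.5× figure quoted above, and 16 sockets at 192 TB each lands at just over 3 PB, matching the "3 petabytes per node" claim.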

For data center operators running large AI workloads, Tachyum highlights its TAI data types, which cut memory-bandwidth requirements by up to a factor of four, enabling TAI inference performance equivalent to systems with up to 27 TB/s of memory bandwidth. In a 16-socket node configuration, Prodigy provides 16 GB of cache, roughly 130 times the 126 MB of NVIDIA’s B300, further reducing memory bottlenecks for large-scale inference and training applications.
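The cache comparison above checks out arithmetically (figures are Tachyum's claims; the calculation assumes both cache sizes are expressed in binary units):

```python
# Cache ratio check: 16 GB across a 16-socket Prodigy node versus
# the 126 MB claimed for NVIDIA's B300 (binary units assumed).
prodigy_cache_mb = 16 * 1024   # 16 GB in MB
b300_cache_mb = 126
print(round(prodigy_cache_mb / b300_cache_mb))   # -> 130
```

16384 MB divided by 126 MB is 130.03, which rounds to the 130× figure in the article.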

Cost and energy efficiency comparisons presented by Tachyum suggest that Prodigy Ultimate solutions could require an order of magnitude less investment and energy than NVIDIA B300 alternatives for extremely large AI models. For example, the company claims that 180,000 Prodigy Ultimate processors with extra-tall TDIMMs could handle next-generation AI models for an estimated $9 billion in memory costs and 540 megawatts of power, reportedly around one-hundredth of the $3 trillion and 250 gigawatts projected for comparable configurations based on NVIDIA hardware.
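The straight ratios behind these claims are worth spelling out. Note that the $9 billion figure covers memory alone, so dividing it into the $3 trillion projection gives a larger ratio than the roughly 100× cited; the 100× figure lines up with the $27 billion total system cost quoted by Tachyum's CEO. A rough check, using only the numbers reported above:

```python
# Ratios implied by the reported cost and power comparison
# (all inputs are Tachyum's claims, not independent figures).
nvidia_cost = 3e12        # $3 trillion projected NVIDIA-based build-out
prodigy_mem_cost = 9e9    # $9 billion claimed memory cost, 180,000 CPUs
nvidia_power = 250e9      # 250 GW, in watts
prodigy_power = 540e6     # 540 MW, in watts

print(round(nvidia_cost / prodigy_mem_cost))   # ~333x on memory cost alone
print(round(nvidia_power / prodigy_power))     # ~463x on power
```

Both ratios exceed 100, so the article's "order of magnitude" framing is, if anything, conservative relative to the claimed inputs.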

“With Prodigy Ultimate, the unsustainable $3 trillion datacenter consuming 250 gigawatts of power can be shrunk to a $27 billion datacenter consuming 540 megawatts in 2028 instead of 2033,” said Dr. Radoslav Danilak, founder and CEO of Tachyum. “With Prodigy Ultimate, humanity can move to an era of AI trained on all written knowledge produced by mankind.”

Interested technical professionals can download further documentation on Prodigy’s architecture and features at Tachyum’s website.

Source: Tachyum
