Numem introduces AI Memory Engine to eliminate data center memory bottlenecks and reduce power consumption

Numem has announced its AI Memory Engine, a fully synthesizable memory subsystem IP designed to address the performance limitations of traditional memory for artificial intelligence (AI) workloads in data centers and edge environments. According to Numem, the AI Memory Engine significantly improves power efficiency, performance, and endurance for emerging memory technologies, including Magnetoresistive RAM (MRAM), Resistive RAM (RRAM), Phase Change RAM (PCRAM), and Flash memory.

Specifically optimized for embedded applications, the AI Memory Engine integrates with Numem’s patented MRAM architecture, enabling SRAM-level performance while achieving densities up to 2.5 times higher and standby power consumption 100 times lower than traditional memory solutions. Numem says its MRAM supports die densities up to 1 GB, providing scalability and integration flexibility in standard foundry environments.

“Every week, I hear the same thing from customers: their memory can’t keep up,” said Max Simmons, CEO of Numem. “Not enough performance, not enough density, and way too much power consumption. AI workloads are pushing existing architectures to the limit – especially in areas like automotive, where in-vehicle infotainment (IVI) systems now rely on multiple cameras and real-time AI. DRAM just isn’t cutting it. It’s too slow to boot, consumes too much power, and simply can’t meet the performance demands of modern systems.”

Numem claims the AI Memory Engine achieves between 30% and 50% power savings compared to existing high-bandwidth memory technologies, translating into reduced operating costs and lower energy usage. Its flexible power management architecture supports multiple power modes, enabling seamless integration into data center and edge infrastructure without extensive hardware redesign.

The company’s approach leverages a fabless model, offering foundry-ready IP and silicon to simplify deployment for a variety of AI-driven use cases, including automotive and custom system-on-chip (SoC) designs. According to Numem, this positions the company strategically within the emerging MRAM market, projected by Polaris Market Research to reach a total addressable market of $25.1 billion by 2030.

Source: Numem
