Velaura AI has introduced Titan Core, a silicon design and IP platform aimed at cutting power consumption in AI accelerators. The company says Titan Core can reduce overall chip power by up to 2x, saving up to 500 W on a typical 1000 W GPU or XPU.
Velaura AI describes Titan Core as being based on patented digital design technology and validated across more than 30 million “leading-edge ASICs” in production deployments over multiple years. The company also says it has ongoing engagements with several hyperscaler XPU partners targeting 3 nm and 2 nm process nodes, and that it is in discussions to integrate Titan Core into next-generation AI accelerators.
Power is already the hard ceiling for many AI buildouts, and a claim like “500 W per accelerator” is the sort of number that matters immediately to facility planners. If those savings are real at scale, they don’t just reduce electrical demand; they can also ease thermal design and increase how much compute fits inside an existing power envelope.
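The facility-level arithmetic implied by those claims is straightforward. The sketch below works through it using only the figures quoted in this article; the 1 MW budget is an illustrative assumption, not anything Velaura AI has published.

```python
# Back-of-envelope check of the headline numbers. All figures are either
# quoted from the article (1000 W chip, 500 W savings) or illustrative
# assumptions (the 1 MW facility budget).

def deployable_accelerators(facility_budget_w: float, per_chip_w: float) -> int:
    """How many accelerators fit inside a fixed facility power budget."""
    return int(facility_budget_w // per_chip_w)

budget_w = 1_000_000        # assumed 1 MW of accelerator power budget
baseline_chip_w = 1000      # "typical 1000 W GPU or XPU" per the article
optimized_chip_w = 500      # with the claimed "up to 500 W" of savings

before = deployable_accelerators(budget_w, baseline_chip_w)
after = deployable_accelerators(budget_w, optimized_chip_w)
print(f"Same 1 MW envelope: {before} chips before, {after} after")
```

At the claimed best case, the same power envelope holds twice as many accelerators, which is why the number matters to planners even before any electricity-cost savings are counted.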
Technically, Velaura AI says Titan Core targets the power consumed by matrix multiplication (MATMUL) operations. The company claims the platform reduces the energy required for those operations by 2–4x using proprietary circuit and library technology at advanced process nodes. Velaura AI says it starts from a customer’s RTL design, then applies proprietary libraries and a physical design methodology to deliver an optimized physical layout; the resulting design is intended to operate at a lower voltage while maintaining “full functional equivalence” and integrating into existing SoC architectures and design flows.
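The emphasis on low-voltage operation is plausible on first principles: dynamic (switching) power in CMOS logic scales with the square of supply voltage. The sketch below shows that standard relationship; the specific voltages are assumed values for illustration, since Velaura AI has not disclosed Titan Core's actual operating points.

```python
# CMOS dynamic power scales as P_dyn ∝ alpha * C * V^2 * f, so even a modest
# supply-voltage reduction cuts switching power quadratically. The voltages
# below are illustrative assumptions, not published Titan Core figures.

def dynamic_power_ratio(v_nominal: float, v_reduced: float) -> float:
    """Ratio of dynamic power at a reduced supply voltage vs nominal,
    holding activity factor, capacitance, and clock frequency constant."""
    return (v_reduced / v_nominal) ** 2

# e.g. dropping from an assumed 0.75 V nominal to 0.55 V
ratio = dynamic_power_ratio(0.75, 0.55)
print(f"Dynamic power at reduced voltage: {ratio:.0%} of nominal")
```

The catch, and presumably where the proprietary libraries and physical design methodology come in, is that logic normally slows down at lower voltage; the claim of "full functional equivalence" implies the design closes timing at the reduced operating point rather than trading away frequency.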
Rajiv Khemani, co-founder and CEO of Velaura AI, said: “Our initial engagements with top hyperscalers are validating Titan Core’s capability to dramatically reduce global AI data center power costs and improve AI sustainability.”
Patrick Moorhead, CEO and chief analyst at Moor Insights & Strategy, said: “That production track record at 3nm is what separates this from a whiteboard exercise. If the efficiency gains hold up under independent validation, the implications for data center TCO and deployable compute capacity are substantial.”
Dylan Patel, founder and CEO of SemiAnalysis, said: “By reducing the energy needed to generate AI tokens, Titan Core is a pragmatic solution that Velaura AI’s customers can use to drive efficiency, lower OPEX, and allow more of the total datacenter BOM to be spent on compute.”
Velaura AI said its offering includes low-voltage digital design libraries for advanced nodes (including 3 nm and 2 nm), custom EDA flows and circuit methodologies optimized for low-voltage design, and potential silicon area reduction depending on design targets.
Source: Velaura AI